Crime-mapping collaboration

Contributed by: YangLu
Characteristics:

Trust Type: Combination; Interaction: Collaboration; Stage: Mis-calibrated; Risk: Task Failure; System: Virtual; Test Environment: ITL-Immersive; Measurement: Behavioural, Self-Reported; Pattern: Reliability calibration

Description

Human-agent teams work on a crime-mapping task, searching and reviewing spatiotemporal crime data and deciding where to allocate limited crime-prevention resources via a shared map. Teammates have access to data for different subsets of crimes, so they depend on one another to complete the task: working against the clock to prevent as many crimes as possible on the target date. Trust in the agent was assessed using four items from Merritt's (2011) scale of trust in automation. Reliance was measured behaviourally: participants could change their teammate's resource allocations.
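
To make the two measures concrete, here is a minimal scoring sketch. The aggregation of the four trust items and the definition of reliance (the share of the agent's allocations the participant leaves unchanged) are illustrative assumptions, not details taken from the paper; all names are hypothetical.

```python
from statistics import mean

def trust_score(item_ratings):
    """Self-reported trust: mean of the four Merritt (2011) items.

    item_ratings: four Likert responses, e.g. on a 1-7 scale.
    (Assumed aggregation; the paper does not spell it out.)
    """
    assert len(item_ratings) == 4
    return mean(item_ratings)

def reliance_rate(agent_allocations, final_allocations):
    """Behavioural reliance: share of the agent's resource allocations
    that the participant left unchanged on the shared map.
    (Illustrative definition; overriding an allocation counts as
    non-reliance.)
    """
    kept = sum(1 for a, f in zip(agent_allocations, final_allocations)
               if a == f)
    return kept / len(agent_allocations)

# Example: trust of 5.25/7; participant kept 3 of 4 agent allocations.
print(trust_score([5, 6, 4, 6]))              # 5.25
print(reliance_rate(["A", "B", "C", "D"],
                    ["A", "B", "C", "E"]))    # 0.75
```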

Commentary

Makes use of a configurable testbed, the Computer-Human Allocation of Resources Testbed (CHART), which is reusable for testing other aspects of human-agent collaboration. Key aspects: highly engaging; meaningful performance metrics, which can include monetary incentives; can be run online or lab-based; tracks behavioural metrics; available to the research community. See the source below for more information.

Original purpose

To understand the impact of agent transparency (high/low) and reliability (high/low) in the context of human-agent teams. Specifically, the study hypothesises:

H1: Calibrated trust in (and subsequent reliance on) an agent will be higher when agent reliability is high.
H2: When agent reliability is high, transparency will not have an effect on trust (or subsequent reliance). In contrast, when agent reliability is low, transparency will have a positive effect on calibrated trust and reliance.
H3 (exploratory): Transparency will have no influence, while reliability will have a negative influence, on perceived workload.
H4: When agent reliability is high, teammate valence will be higher than when reliability is low.
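
As a rough illustration of what "calibrated" trust means here (trust that tracks the agent's actual reliability), the sketch below compares normalised self-reported trust against agent reliability. The tolerance threshold and the over-/under-trust labels are assumptions made for illustration, not the paper's operationalisation.

```python
def calibration(trust, trust_max, reliability, tolerance=0.15):
    """Compare normalised self-reported trust with the agent's actual
    reliability (both on a 0-1 scale).

    Returns 'calibrated' when trust is within `tolerance` of
    reliability, 'over-trust' when trust exceeds it by more than that,
    and 'under-trust' otherwise. (Illustrative definition only.)
    """
    t = trust / trust_max
    if abs(t - reliability) <= tolerance:
        return "calibrated"
    return "over-trust" if t > reliability else "under-trust"

# Per H1: trust of 6/7 in a 90%-reliable agent is well calibrated...
print(calibration(6, 7, 0.9))   # calibrated
# ...while the same trust in a 50%-reliable agent indicates over-trust.
print(calibration(6, 7, 0.5))   # over-trust
```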

RRI issues

Fairness and Bias: An essential concern in AI-assisted decision-making, especially in applications like crime-mapping, is the potential for algorithmic bias.

Transparency and Explainability: It is crucial that the logic and reasoning used by the AI agent to assist in resource allocation are transparent and understandable to users.

Source

Bobko, P., Hirshfield, L., Eloy, L., Spencer, C., Doherty, E., Driscoll, J., & Obolsky, H. (2023). Human-agent teaming and trust calibration: a theoretical framework, configurable testbed, empirical illustration, and implications for the development of adaptive systems. Theoretical Issues in Ergonomics Science, 24(3), 310-334. https://www.tandfonline.com/doi/pdf/10.1080/1463922X.2022.2086644