AI-assisted airport luggage screening
Contributed by:
sachiniw
Characteristics:
Trust Type: Competency-based; Interaction: Influence; Stage: Changing; Risk: Task Failure; System: Embedded; Test Environment: ITL-Immersive; Measurement: Behavioural; Pattern: Reliability calibration;
Description
Participants played the role of an airline luggage screener, assisted by an AI tool that classified x-ray images of passenger luggage. One group received no information about the AI tool. The other group was given information about the system's development and functions, and was told it was a recently developed emergent tool whose credibility was not yet established. Participants were first shown the x-ray image. Next, the AI tool displayed its diagnosis regarding the presence or absence of a knife. The participants then provided their own diagnosis. Finally, they received textual feedback on the accuracy of their diagnosis.
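The trial sequence above can be sketched as a simple simulation loop. This is a minimal illustrative sketch, not the study's actual materials: the `Trial` structure, the AI aid's reliability level, and the feedback wording are all assumptions introduced here for clarity.

```python
import random
from dataclasses import dataclass

@dataclass
class Trial:
    image_id: str        # identifier for the x-ray image shown first
    knife_present: bool  # ground truth for this piece of luggage

def ai_diagnosis(trial: Trial, reliability: float = 0.85) -> bool:
    """Simulated AI aid: returns a knife-present/absent classification.

    The aid is correct with probability `reliability` (an assumed value;
    the study's actual system reliability is not specified here).
    """
    correct = random.random() < reliability
    return trial.knife_present if correct else not trial.knife_present

def run_trial(trial: Trial, participant_response: bool) -> str:
    """One trial: image -> AI diagnosis -> participant diagnosis -> feedback."""
    aid_says = ai_diagnosis(trial)          # shown to the participant
    accurate = participant_response == trial.knife_present
    # Textual accuracy feedback, as in the final step of the procedure
    return "Your diagnosis was correct." if accurate else "Your diagnosis was incorrect."
```

In the actual study the participant's response would come from a human; here it is simply passed in as a boolean so the sequence of steps is explicit.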
Commentary
Presumably, trust in a particular type of screening technology would be some function of the amount of prior knowledge the screener has about the system and its capabilities vis-à-vis the screener's own ability to perform the task unaided. If such emergent systems generate "easy" errors that a screener would conceivably not have made unaided, the screener is much less likely to trust the automated system.
Original purpose
To examine how much background information about an emergent system must be given to naïve users to engender appropriate trust and utilization.
RRI issues
Transparency. This study highlights the crucial role of transparency in fostering trust between users and new technologies. Providing users with background information, even when the system is relatively unproven, leads to better trust and utilization.
Source
P. Madhavan, Is ignorance bliss? Role of credibility information and system reliability on user trust in emergent technologies, Adv. Cogn. Eng. Neuroergon. 11 (2014) 1532–1539.