Library

Filter (then click <Search>) for the types of usecases you want to see, and/or enter search terms in the text box. Note that the search is not "smart": it doesn't accept 'AND' or 'OR', but an exact search term will be matched across all text fields. For details of the filter options, see Explanation of Terms.


Filters: Trust Type · Interaction · Stage · Risk · System · Test Environment · Measurement · Pattern

Sort by: Title · Contributor · User Rating
Ad hoc mobile human-robot teams for inspection
Contributed by: edmund
Users unfamiliar with a rover robot had to work alongside two of them, searching an area by scanning QR codes to represent inspecting a dangerous (post-fire) environment. The team was 'ad hoc' in that the users had not used the robots before and would not use them again in a follow-up study. Experimenters disrupted trust by interrupting communications, to see how trust might be damaged and whether it could recover once communications were restored. ...
Adapted trust game
Contributed by: YangLu
Participant completes a trading task with an artificial assistant (Assisto) to reduce the number of times they'll have to carry out a subsequent image classification task. The usecase is a variation on the investment/trust game. Participants begin with 10 points and are told that every point deducts one image from the total they will have to classify. They can give points to Assisto; however many they give are tripled. Assisto then gives back anywhere between 0 points and its total (up to 40, depending on how many points the participant initially gave). This determines the final point score and the number of classifications the participant must execute in the next task. ...
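The payoff arithmetic above can be sketched as a small function. Note an assumption: the stated maximum return of 40 is consistent with Assisto holding its own endowment of 10 points in addition to the tripled transfer, which the summary does not state explicitly.

```python
def trust_game_payoff(given, returned, endowment=10, multiplier=3):
    """Final point score for the participant in the adapted trust game.

    Assumes Assisto also starts with an endowment of 10 points, which
    would explain the stated maximum return of 40 (10 + 3 * 10).
    """
    if not 0 <= given <= endowment:
        raise ValueError("participant can give between 0 and their endowment")
    # Assisto's total: its (assumed) endowment plus the tripled transfer.
    assisto_total = endowment + multiplier * given
    if not 0 <= returned <= assisto_total:
        raise ValueError("Assisto returns between 0 and its total")
    # Participant keeps what they didn't give, plus whatever is returned;
    # each final point removes one image from the classification task.
    return endowment - given + returned
```

For example, giving nothing and receiving nothing leaves the starting 10 points, while full trust met with full reciprocation yields the maximum of 40.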
AI-assisted airport luggage screening
Contributed by: sachiniw
Participants played the role of an airline luggage screener, assisted by an AI tool classifying x-ray images of passenger luggage. One group received no information about the AI tool. The other group was given information about the system's development and functions, and was told it was a recently developed, emergent tool whose credibility was not yet established. The participant is first shown the x-ray image. Next, the AI tool shows its diagnosis regarding the presence/absence of a knife. Then the participant provides their own diagnosis. Finally, the participant receives textual feedback on the accuracy of their diagnosis. ...
Classifier + EEG
Contributed by: YangLu
Participants were trained on the system monitoring subtask of the Air Force Multi-Attribute Task Battery (AF-MATB) - a simulator used to evaluate operator performance and mental workload. Participants were told that its algorithms had been developed by 'expert' or 'novice' teams. Wearing an electroencephalogram (EEG) cap, they then monitored 4 fluctuating dials, each showing its acceptable range of operation, and either counted the number of automation failures they observed or intervened by pressing a button. Data from the EEG was fed to a classifier which estimated their level of trust, which could then be compared with their self-reported trust ratings. ...
Classify Photos
Contributed by: petam
The experiment was conducted asynchronously via a mobile app. Participants were told to imagine that they had been hired by an organization focused on creating a database of species seen all over the Philippines. They were to be provided with a recognition AI to help classify any species they encountered, and were given three days to classify species sent to their system. The AI would provide its classification, but participants could input their own if they wished. Each day, the participant was provided with 25 random photos via the app. On each trial they gave an evaluation of the AI's usefulness, the emotions they felt, and their degree of trust. ...
Clinical decision support
Contributed by: YangLu
Participants are required to classify prescriptions as either confirmed or rejected with the help of a web-based tool - a mock-up built on templates and interfaces familiar to participants from their regular decision-making tasks. Clicking on a prescription displays the patient profile along with the AI's recommendation (accepted or rejected) and an explanation to help the participant understand the rationale behind the AI's decision. ...
Crime-mapping collaboration
Contributed by: YangLu
Human-agent teams work on a crime-mapping task, searching and reviewing spatiotemporal crime data and deciding where to allocate limited crime-prevention resources via a shared map. Teammates have access to data for different subsets of crimes, so they depend on one another to complete the task: working against the clock to prevent as many crimes as possible on the target date. Trust in the agent was assessed using four items from Merritt's (2011) scale of trust in automation. Reliance was measured behaviourally: the participant could change their teammate's resource allocation. ...
Drone identifying human trafficking vehicles
Contributed by: sachiniw
Participants watch a video of a hypothetical drone system capable of identifying, tracking and neutralizing human trafficking vehicles, assuming the role of a drone operator. The video contained narrations of 4 different information types: (1) control - describing what the system was used for; (2) performance only - the consistency and reliability of system behavior; (3) process only - qualities of the system's behavior such as its algorithms; and (4) both performance and process. After watching the video, participants answer questionnaires evaluating the system's ability, integrity and benevolence. ...
Emergency stop
Contributed by: MMousavi
Members of the public were invited to push a button to move a dummy pedestrian in front of a moving 1/8 scale autonomous vehicle in order to observe how its sensors enable it to stop at a safe distance. ...
Explaining what Kaspar can see
Contributed by: MMousavi
Participants are shown a video of a humanoid robot - Kaspar - playing a game with a child where it asks the child to show it an object. When Kaspar can't see the object, it gives the reason, e.g., 'I cannot see it because you are holding it too high' or 'this is not the animal I have asked to see'. The child then has an opportunity to change how or what they show to Kaspar. After seeing the video, adult participants are asked several questions about the explanation Kaspar provided, including whether 'this explanation lets me judge when I should trust and not trust Kaspar'. ...