Explaining what Kaspar can see

Contributed by: MMousavi
Characteristics:

Trust Type: Value-based; Interaction: Conversation; Stage: Changing; Risk: Task Failure; System: Robotic-Non-AV; Test Environment: ITL-Audio-Visual; Measurement: Self-Reported; Pattern: None;

Description

Participants are shown a video of a humanoid robot - Kaspar - playing a game with a child in which it asks the child to show it an object. When Kaspar cannot see the object, it gives the reason, e.g., 'I cannot see it because you are holding it too high' or 'this is not the animal I have asked to see'. The child then has an opportunity to change how or what they show to Kaspar. After watching the video, adult participants are asked several questions about the explanation Kaspar provided, including whether 'this explanation lets me judge when I should trust and not trust Kaspar'.

Commentary

Kaspar is designed to interact with children with autism, but this use case examines not only how successful Kaspar is from a communication point of view. It also examines which types of causal explanation are likely to increase trust and whether that trust translates into a behavioural response.

Original purpose

Assess from adult responses whether (and which types of) explanations are likely to promote trust, and whether that trust translates into improved mediation between robot and child.

RRI issues

Special care must always be taken when an experiment involves children.

Source

Araujo, Hugo, et al. "Kaspar Causally Explains." International Conference on Social Robotics. Cham: Springer Nature Switzerland, 2022.