Dimension characteristics

Trust Type: Competency-based
Interaction: Collaboration
Stage: Changing
Risk: Task Failure
System: Robotic - Non-AV
Test Environment: ITL - In-Person
Measurement: Behavioural; Physiological
Application Domain: UC needs special equipment
Pattern: None

Robot hands participant two objects of different risk levels

Contributed by: VictoriaY

Description

The robot performs two object-exchange tasks of differing risk levels: 1) giving a glass of water to, or receiving one from, the participant (high risk); 2) giving or receiving a piece of Lego (low risk). In the water task, the speed at which the robot performs the action is varied (slow vs. fast). In the Lego task, the robot's position is varied (in the participant's field of view vs. out of it). Tasks are performed in close proximity to the robot, and participants' heart rate and galvanic skin response are recorded throughout. A minimal sketch of the resulting design appears below.
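As a concrete sketch of that design, the four conditions and the per-trial record might be organised as follows. This is illustrative only (Python, with hypothetical names); the original study does not publish code.

from dataclasses import dataclass

# The two tasks and their within-task manipulations, as described above.
# Condition labels are illustrative, not taken from the original study.
CONDITIONS = [
    {"task": "water_glass", "risk": "high", "manipulation": "speed", "level": "slow"},
    {"task": "water_glass", "risk": "high", "manipulation": "speed", "level": "fast"},
    {"task": "lego_piece", "risk": "low", "manipulation": "position", "level": "in_view"},
    {"task": "lego_piece", "risk": "low", "manipulation": "position", "level": "out_of_view"},
]

@dataclass
class TrialRecord:
    """One give/receive exchange, with the physiological signals logged during it."""
    participant_id: str
    condition: dict
    direction: str          # "give" or "receive"
    heart_rate_bpm: list    # heart-rate samples recorded over the trial
    gsr_microsiemens: list  # galvanic skin response samples recorded over the trial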

Commentary

This is a genuinely collaborative interaction, and it features estimation of the participant's trust level from behavioural and physiological measures. It requires a laboratory test environment and a specific robot fitted with a gripper, together with equipment to measure participants' heart rate and galvanic skin response.
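The sketch below shows one simple way a trust estimate might be derived from the recorded signals, by baseline-normalising heart rate and galvanic skin response and treating lower arousal as higher trust. This is an illustrative assumption for orientation only, not the estimation method used in the source paper.

import statistics

def estimate_trust(hr_samples, gsr_samples, hr_baseline, gsr_baseline):
    """Illustrative trust proxy: arousal relative to a resting baseline is
    averaged across the two signals, and higher arousal maps to lower trust.
    Not the model from the source paper; consult it for the actual approach."""
    hr_arousal = max(0.0, statistics.mean(hr_samples) / hr_baseline - 1.0)
    gsr_arousal = max(0.0, statistics.mean(gsr_samples) / gsr_baseline - 1.0)
    arousal = 0.5 * (hr_arousal + gsr_arousal)
    return 1.0 / (1.0 + arousal)  # 1.0 when fully at baseline, falling towards 0 as arousal grows

# Example: slightly elevated heart rate and GSR during the high-risk water task.
print(estimate_trust([82, 85, 88], [4.2, 4.6, 4.9], hr_baseline=75, gsr_baseline=3.8))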

Original purpose

To compare the impact of risk and speed on user trust during a collaborative task.

RRI issues

Requires participant consent to record physiological responses.

Source

Adapted from Henriksen, J.W., Johansen, A.S. and Rehm, M., 2020, March. Pilot Study for Dynamic Trust Estimation in Human-Robot Collaboration. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 242-244).


Comments from other contributors

petam commented 1 year ago:

It is good that this uses physiological trust measurement rather than depending entirely on questionnaires, and it allows risk to be modulated simultaneously with other features. I can imagine using this use case in multiple different domains and introducing robots with many different characteristics to test which, if any of them, increases the perceived trustworthiness of the robot. It would be interesting to see whether a robot with a face, for example, is trusted to take a glass of water more readily than one without. This would then be investigating value-based trust (perhaps) rather than strictly competence-based trust, as the addition of a face adds nothing to a machine's capacity to perform the task. But as reported by the contributor, there are limitations because of the requirement for a robot with a gripper.

How easy do you think it would be to repurpose this use case for different sorts of tests?

Rating: Hard
