Clinical decision support

Contributed by: Yang Lu
Characteristics:

Trust Type: Competency-based
Interaction: Collaboration
Stage: Mis-calibrated
Risk: Physical
System: Embedded
Test Environment: ITL-Immersive
Measurement: Self-Reported
Application Domain: Participants need special skills; Aspects are task-specific
Pattern: Reliability calibration

Description

Participants are required to classify prescriptions as either confirmed or rejected with the help of a web-based tool: a mock-up built on templates and interfaces familiar to participants from their regular decision-making tasks. Clicking on a prescription displays the patient profile along with the AI's recommendation (confirmed or rejected) and an explanation to help the participant understand the rationale behind the AI's decision.
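
To make the interaction concrete, the sketch below outlines the information such a mock-up might present for each case and record per participant decision. This is a minimal illustration only; all type and field names (PrescriptionCase, ParticipantResponse, and so on) are assumptions for clarity, not taken from the study's actual implementation.

```typescript
// Illustrative sketch only: names and fields are assumptions,
// not the study's actual data model.

// The four explanation classes tested, plus a no-explanation control.
type ExplanationClass =
  | "none"
  | "local"
  | "example-based"
  | "counterfactual"
  | "global";

// What the mock-up shows when a participant clicks a prescription.
interface PrescriptionCase {
  caseId: string;
  patientProfile: {
    age: number;
    diagnosis: string;
    currentMedications: string[];
  };
  prescription: string;                       // the prescription under review
  aiRecommendation: "confirmed" | "rejected"; // the AI's suggested decision
  explanationClass: ExplanationClass;         // which explanation condition applies
  explanationText?: string;                   // rationale shown to the participant
}

// The participant's decision is recorded alongside the AI's recommendation,
// so agreement with correct and incorrect recommendations can be analysed.
interface ParticipantResponse {
  caseId: string;
  participantDecision: "confirmed" | "rejected";
  agreedWithAI: boolean;
  selfReportedTrust: number; // e.g. a Likert-scale rating
}
```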

Commentary

Participants are medical professionals. The simulated web-based tool provides a realistic user experience, akin to operating an actual system. The presence of explanations (of various classes) can introduce a risk of over-reliance. Four classes of explanation were tested: local, example-based, counterfactual, and global.

Original purpose

To evaluate how five explanation conditions (no explanation, local, example-based, counterfactual, and global explanations), combined with the correctness of the AI's recommendation (correct or incorrect), affect the user's trust in the AI (increase or decrease). To understand user requirements for the interfaces used to deliver the different explanation types (local, example-based, counterfactual, and global).

RRI issues

Transparency and Explainability: The use case highlights that the AI tool provides explanations for its decisions, which is crucial for transparency and helps users understand the AI’s rationale.

Source

Naiseh, M., Al-Thani, D., Jiang, N., & Ali, R. (2023). How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169, 102941.
