You Can Only Verify When You Know the Answer: Feature-Based Explanations Reduce Overreliance on AI for Easy Decisions, but Not for Hard Ones

Zelun Tony Zhang, Felicitas Buchner, Yuanting Liu and Andreas Butz

Mensch und Computer 2024 (MuC ’24), pp. 1–15

September 2024 · doi: 10.1145/3670653.3670660

abstract

Explaining the mechanisms behind model predictions is a common strategy in AI-assisted decision-making to help users rely appropriately on AI. However, recent research shows that the effectiveness of explanations depends on numerous factors, leading to mixed results: many studies find no effect or even an increase in overreliance, while others find that explanations do improve appropriate reliance. We consider the factor of decision difficulty to better understand when feature-based explanations can mitigate overreliance. To this end, we conducted an online experiment (N = 200) with carefully selected task instances that cover a wide range of difficulties. We found that explanations reduce overreliance for easy decisions, but that this effect vanishes with increasing decision difficulty. For the most difficult decisions, explanations might even increase overreliance. Our results imply that explanations of the model's inner workings are only helpful for a limited set of decision tasks where users easily know the answer themselves.

subject terms: explainable AI, overreliance, human-AI decision-making, AI-assisted decision-making, decision difficulty, online experiment