Mensch und Computer 2024 (MuC ’24), pp. 1–15
September 2024 · DOI: 10.1145/3670653.3670660
Explaining the mechanisms behind model predictions is a common strategy in AI-assisted decision-making to help users rely appropriately on AI. However, recent research shows that the effectiveness of explanations depends on numerous factors and has yielded mixed results: many studies find no effect or even an increase in overreliance, while others find that explanations do improve appropriate reliance. We consider the factor of decision difficulty to better understand when feature-based explanations can mitigate overreliance. To this end, we conducted an online experiment (N = 200) with carefully selected task instances covering a wide range of difficulties. We found that explanations reduce overreliance for easy decisions, but that this effect vanishes as decision difficulty increases. For the most difficult decisions, explanations might even increase overreliance. Our results imply that explanations of the model's inner workings are helpful only for a limited set of decision tasks where users can easily determine the answer themselves.
Keywords: explainable AI, overreliance, human-AI decision-making, AI-assisted decision-making, decision difficulty, online experiment