CHItaly '21: 14th Biannual Conference of the Italian SIGCHI Chapter, pp. 18:1-18:5
July 2021 · doi: 10.1145/3464385.3464696
Decision support systems based on AI are usually designed to generate complete outputs fully automatically and to explain them to users. However, explanations, no matter how well designed, may not adequately address the output uncertainty of such systems in many applications. This is especially the case when the human-out-of-the-loop problem persists, a fundamental human limitation. There is, however, no reason to limit decision support systems to such backward-reasoning designs. We argue that more interactive forward-reasoning designs, in which users are actively involved in the task, can be effective in managing output uncertainty. We therefore call for a more complete view of the design space for decision support systems, one that includes both backward- and forward-reasoning designs. Such a view is necessary to overcome the barriers that hinder AI deployment, especially in high-stakes applications.