While artificial intelligence (AI) holds promise for addressing societal challenges, the questions of exactly which tasks to automate, and to what extent, remain understudied. We approach this problem of task delegability from a human-centered perspective by developing a framework on human perception of task delegation to AI. We consider four high-level factors that can contribute to a delegation decision: motivation, difficulty, risk, and trust. To obtain an empirical understanding of human preferences across tasks, we build a dataset of 100 tasks drawn from academic papers, popular media portrayals of AI, and everyday life, and administer a survey based on our proposed framework. We find little preference for full AI control and a strong preference for machine-in-the-loop designs, in which humans play the leading role. Among the four factors, trust is the most correlated with human preferences for optimal human-machine delegation. This framework represents a first step towards characterizing human preferences for AI automation across tasks. We hope this work encourages future efforts towards understanding such individual attitudes; our goal is to inform the public and the AI research community rather than to dictate any direction in technology development.
Brian Lubars (University of Colorado Boulder)
Master's student at the University of Colorado Boulder, interested in human-centered machine learning and computational social science.
Chenhao Tan (University of Colorado Boulder)
Related Events (a corresponding poster, oral, or spotlight)
2019 Poster: Ask not what AI can do, but what AI should do: Towards a framework of task delegability »
Thu Dec 12th 05:00 -- 07:00 PM Room East Exhibition Hall B + C