

Invited Talk in Workshop: Metacognition in the Age of AI: Challenges and Opportunities

Performance-Optimized Neural Networks as an Explanatory Framework for Decision Confidence

Taylor Webb · Hakwan Lau


Abstract:

Previous work has sought to understand decision confidence as a prediction of the probability that a decision will be correct, leading to debate over whether these predictions are optimal, and whether they rely on the same decision variable as decisions themselves. This work has generally relied on idealized, low-dimensional modeling frameworks, such as signal detection theory or Bayesian inference, leaving open the question of how decision confidence operates in the domain of high-dimensional, naturalistic stimuli. To address this, we developed a deep neural network model optimized to assess decision confidence directly given high-dimensional inputs such as images. The model naturally accounts for several puzzling dissociations between decisions and confidence, and suggests a novel explanation for them: optimization for the statistics of sensory inputs. It also makes the surprising prediction that, despite these dissociations, decisions and confidence depend on a common decision variable.
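To make the "confidence as predicted probability of being correct" idea concrete, here is a minimal sketch of the idealized signal-detection-theory baseline the abstract contrasts with, not the authors' neural network model. Under equal-variance SDT with sensitivity d', the evidence x is Gaussian around ±d'/2, the optimal decision is sign(x), and the optimal confidence is the posterior probability that this decision is correct, which works out to sigmoid(d'·|x|). All variable names below are illustrative assumptions.

```python
import numpy as np

def sdt_confidence(x, d_prime):
    """Posterior probability that the decision sign(x) is correct.

    Under equal-variance SDT, the log-likelihood ratio is d' * x,
    so P(correct | x) = sigmoid(d' * |x|) for the decision sign(x).
    """
    return 1.0 / (1.0 + np.exp(-d_prime * np.abs(x)))

rng = np.random.default_rng(0)
d_prime = 1.5
n = 200_000

# Simulate two stimulus classes: means at -d'/2 and +d'/2, unit variance.
labels = rng.integers(0, 2, n)
x = rng.normal((labels * 2 - 1) * d_prime / 2, 1.0)

decisions = (x > 0).astype(int)          # optimal decision rule: sign(x)
conf = sdt_confidence(x, d_prime)        # optimal confidence per trial
correct = (decisions == labels)

# For an optimal observer, mean confidence matches empirical accuracy.
print(round(conf.mean(), 3), round(correct.mean(), 3))
```

Note that in this idealized setting decisions and confidence share the same one-dimensional decision variable x by construction; the empirical dissociations the abstract describes are what make the high-dimensional, optimization-based account necessary.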