The science of consciousness has made great strides by focusing on the behavioral and neuronal correlates of experience. However, correlates are not enough if we are to understand even basic facts, for example, why the cerebral cortex gives rise to consciousness but the cerebellum does not, though it has even more neurons and appears to be just as complicated. Moreover, correlates are of little help in many instances where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, pre-term infants, non-mammalian species, and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need not only more data, but also a theory of consciousness – one that says what experience is and what type of physical systems can have it. Integrated Information Theory (IIT) does so by starting from conscious experience itself via five phenomenological axioms: existence, composition, information, integration, and exclusion. From these it derives five postulates about the properties required of physical mechanisms to support consciousness. The theory provides a principled account of both the quantity and the quality of an individual experience (a quale), and a calculus to evaluate whether or not a particular system of mechanisms is conscious, and of what. Moreover, IIT can explain a range of clinical and laboratory findings, makes a number of testable predictions, and extrapolates to a number of unusual conditions. The theory vindicates some intuitions often associated with panpsychism: that consciousness is an intrinsic, fundamental property, that it is graded and common among biological organisms, and that even some very simple systems may have some of it.
However, unlike panpsychism, IIT implies that not everything is conscious, for example, aggregates such as heaps of sand, groups of individuals, or feed-forward networks such as deep learning networks. Also, in sharp contrast with widespread functionalist beliefs, IIT implies that digital computers, even if their behavior were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.
Author Information
Christof Koch (Allen Institute for Brain Science)
More from the Same Authors
- 2012 Poster: A System for Predicting Action Content On-Line and in Real Time before Action Onset in Humans – an Intracranial Study
  Uri M Maoz · Shengxuan Ye · Ian Ross · Adam Mamelak · Christof Koch
- 2007 Poster: Predicting human gaze using low-level saliency combined with face detection
  Moran Cerf · Jonathan Harel · Wolfgang Einhäuser · Christof Koch
- 2007 Demonstration: Predicting Human Gaze Using Low-level Saliency Combined with Face Detection
  Moran Cerf · Christof Koch
- 2006 Poster: Graph-Based Visual Saliency
  Jonathan Harel · Christof Koch · Pietro Perona
- 2006 Talk: Graph-Based Visual Saliency
  Jonathan Harel · Christof Koch · Pietro Perona