Do you think that the machine you are reading this story on, right now, has a sense of “what it’s like” to be in its state?
What about a pet dog? Does it have a sense of what it’s like to be in its state? It may pine for attention, and appear to have a unique subjective experience, but what separates the two cases?
These are by no means easy questions. How and why particular circumstances may give rise to our experience of consciousness remain some of the most puzzling questions of our time.
Newborn babies, brain-damaged patients, complicated machines and animals may display signs of consciousness. However, the extent or nature of their experience remains a hotbed of philosophical enquiry.
Being able to quantify consciousness would go a long way toward answering some of these problems. From a clinical perspective, any theory that could serve this purpose must also be able to account for why certain areas of the brain appear critical to consciousness, and why the damage or removal of other regions seems to have relatively little impact.
One such theory has been gaining support in the scientific community. It’s called Integrated Information Theory (IIT), and was proposed in 2008 by Giulio Tononi, a US-based neuroscientist.
It also has one rather surprising implication: consciousness can, in principle, be found anywhere there is the right kind of information processing going on, whether that’s in a brain or a computer.
Information and consciousness
The theory says that a physical system can give rise to consciousness if two physical postulates are met.
The first is that the physical system must be very rich in information.
If a system is conscious of a vast number of things, like every frame in a movie, and each frame is clearly distinct, then we would say that conscious experience is highly differentiated.
Both your brain and your hard drive are capable of containing such highly differentiated information. But one is conscious and the other is not.
So what is the difference between your hard drive and your brain? For one, the human brain is also highly integrated. There are many billions of cross-links between individual inputs, far more than in any (current) computer.
This brings us to the second postulate, which is that for consciousness to emerge, the physical system must also be highly integrated.
Whatever information you are conscious of is wholly and completely presented to your mind. For, try as you might, you are unable to segregate the frames of a movie into a series of static images. Nor can you completely isolate the information you receive from each of your senses.
The implication is that integration is what differentiates our brains from other highly complex systems.
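To get a feel for these two ingredients, here is a minimal sketch in Python. It is an illustration, not IIT’s formal machinery: Shannon entropy stands in for differentiation, and mutual information between two halves of a toy two-bit system stands in for integration. The “hard-drive-like” and “brain-like” systems are invented for the example.

```python
import numpy as np

def entropy(p):
    """Shannon entropy, in bits, of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def dist(samples, n_states):
    """Empirical distribution over discrete states 0..n_states-1."""
    return np.bincount(samples, minlength=n_states) / len(samples)

rng = np.random.default_rng(0)
n = 100_000

# "Hard-drive-like" system: two independent random bits.
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)

# "Brain-like" system: two coupled bits (d copies c 90% of the time).
c = rng.integers(0, 2, n)
d = np.where(rng.random(n) < 0.9, c, 1 - c)

def report(x, y, label):
    h_whole = entropy(dist(2 * x + y, 4))                     # differentiation
    mi = entropy(dist(x, 2)) + entropy(dist(y, 2)) - h_whole  # integration
    print(f"{label}: differentiation ~ {h_whole:.2f} bits, integration ~ {mi:.2f} bits")

report(a, b, "independent")  # rich in states, but zero integration
report(c, d, "coupled")      # rich in states AND integrated
```

Both toy systems occupy many distinct states, but only the coupled one carries information over and above its parts, which is the property IIT says matters.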
Integrated information and the brain
By borrowing from the language of mathematics, IIT attempts to generate a single number as a measure of this integrated information, known as phi (Φ, pronounced “fi”).
Something with a low phi, such as a hard drive, will not be conscious, whereas something with a high enough phi, like a mammalian brain, will be.
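As a rough intuition for how a single number could fall out of this, here is a deliberately simplified toy measure, not Tononi’s actual formalism (which is defined over a system’s causal structure rather than plain statistics). It scores a set of binary units by the mutual information across the bipartition that severs the least, the system’s “weakest link”: a system with any fully independent part scores zero.

```python
from itertools import combinations
import numpy as np

def entropy_bits(counts):
    """Shannon entropy, in bits, from a vector of state counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def phi_proxy(samples):
    """Toy stand-in for phi: mutual information across the system's
    weakest bipartition (its "minimum information partition").
    `samples` is an (n_observations, n_units) array of 0/1 states."""
    n_units = samples.shape[1]

    def joint_entropy(cols):
        # Encode each row's sub-state as one integer, then count states.
        codes = samples[:, list(cols)] @ (2 ** np.arange(len(cols)))
        return entropy_bits(np.bincount(codes))

    h_whole = joint_entropy(range(n_units))
    best = np.inf
    for k in range(1, n_units // 2 + 1):
        for part in combinations(range(n_units), k):
            rest = [u for u in range(n_units) if u not in part]
            best = min(best, joint_entropy(part) + joint_entropy(rest) - h_whole)
    return best

rng = np.random.default_rng(1)
n = 50_000
independent = rng.integers(0, 2, (n, 3))  # hard-drive-like: no coupling
src = rng.integers(0, 2, (n, 1))          # one hidden source, copied noisily
coupled = (np.repeat(src, 3, axis=1) ^ (rng.random((n, 3)) < 0.1)).astype(int)

print(phi_proxy(independent))  # ~ 0: some cut severs nothing
print(phi_proxy(coupled))      # > 0: every possible cut loses information
```

On this toy measure, the independent system scores near zero no matter how many states it can hold, while the coupled system scores above zero for every possible cut. Real phi calculations follow the same weakest-partition logic, but over cause-effect structure rather than correlations.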
What makes phi interesting is that many of its predictions can be empirically tested: if consciousness corresponds to the amount of integrated information in a system, then measures that approximate phi should differ between altered states of consciousness.
Recently, a team of researchers developed an instrument capable of measuring a quantity related to integrated information in the human brain, and used it to test this idea.
They used electromagnetic pulses to stimulate the brain, and were able to distinguish awake from anaesthetised brains by the complexity of the resulting neural activity.
The same measure could even discriminate between brain-injured patients in vegetative versus minimally conscious states. It also increased when patients went from dreamless to dream-filled stages of sleep.
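The measure behind these experiments, known as the perturbational complexity index, is built on data compression: it compresses the binarised pattern of brain activity that follows each pulse. The sketch below conveys just that compression step with a simple Lempel-Ziv phrase count on made-up signals; the real index operates on spatiotemporal EEG recordings and is carefully normalised.

```python
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Simple LZ78-style complexity: parse the string into the shortest
    phrases not seen before, and count how many phrases are needed.
    Rich, unpredictable signals need many phrases; stereotyped ones few."""
    seen, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count

rng = np.random.default_rng(0)

# Invented stand-ins for a binarised evoked response, illustration only:
# an "awake-like" response with rich, varied structure, and an
# "anaesthetised-like" response that is simple and repetitive.
awake_like = "".join(rng.integers(0, 2, 512).astype(str))
anaesthetised_like = "01" * 256

print(lz_phrase_count(awake_like))          # high: hard to compress
print(lz_phrase_count(anaesthetised_like))  # low: compresses to little
```

The intuition is that an awake brain produces responses that resist compression, suggesting activity that is both differentiated and integrated, while anaesthetised or vegetative brains produce stereotyped responses that compress down to very little.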
IIT also predicts why the cerebellum, an area at the rear of the human brain, seems to contribute only minimally to consciousness. This is despite the cerebellum containing four times more neurons than the cerebral cortex, which appears to be the seat of consciousness.
The cerebellum has a relatively simple, crystalline arrangement of neurons. So IIT would suggest this area is information rich, or highly differentiated, but that it fails IIT’s second requirement: integration.
Although there is a lot more work to be done, this theory of consciousness carries some striking implications.
If consciousness is indeed an emergent feature of a highly integrated network, as IIT suggests, then probably all complex systems – certainly all creatures with brains – have some minimal form of consciousness.
By extension, if consciousness is defined by the amount of integrated information in a system, then we may also need to move away from any form of human exceptionalism that says consciousness is unique to us.