Can science explain consciousness?

In a book published last fall [1], Thomas Nagel defends the idea that science cannot explain consciousness – that the mind is a natural phenomenon that cannot be reduced to physical states of the brain. He also argues that evolutionary theory, at least in its current materialist version, is not sufficient to explain the appearance of the mind. My attention was drawn to the book by Kristina Musholt's review in Science [2].

Illustration by René Descartes of what he considered to be the dualism between mind and body. Photo from Wikipedia, released into the public domain.

Although worth reading to see the state of reflection in that branch of philosophy of mind, the book turns out to provide a very poor argument for its two central claims. Nagel starts by opposing two ideological stances that he labels, in my view, inappropriately. On the one hand, there are "materialists" or "reductionists", among whom he seems to include anyone who thinks that the laws of physics and the facts of biology can explain behaviors and the mind. On the other hand, he refers to those who oppose that view as "antireductionists". There is nothing particularly reductionist in thinking that consciousness could be explained by the networks of neurons in our brain. On the contrary, acknowledging the incredible complexity of those networks and thinking that they could underlie our feelings and mental states may be the most antireductionist claim in the philosophy of mind.

What looks like reductionism to me is the trap into which Nagel falls: taking every element of consciousness that we can't explain yet (I'll address some of those later) and stating that it simply belongs to a non-material mind that cannot be understood with the tools of science. Reducing the spectacular aspects of the mind to another reality that is not understandable, not physical, and not observable other than through our own introspection and intuition does not solve any of the issues; it makes them worse. It is, however, precisely that magic trick which dualists have used for years and which is performed here in a new form – perhaps more in line with monism, but still invoking the existence of something that is "more than physical".

Nagel then asks for an evolutionary explanation of why we are conscious. He does seem to recognize that evolution could, in theory, lead to the appearance of consciousness. He writes:

Selection for physical reproductive fitness may have resulted in the appearance of organisms that are in fact conscious, and that have the observable variety of different specific kinds of consciousness [...]

But then adds:

[...] but there is no physical explanation of why this is so – nor any other kind of explanation that we know of.

[...] To make facts of this kind intelligible, a postmaterialist theory would have to offer a unified explanation of how the physical and the mental characteristics of organisms developed together, and it would have to do so not just by adding a clause to the effect that the mental comes along with the physical as a bonus. [...] Explanation, unlike causation, is not just of an event, but of an event under a description. An explanation must show why it was likely that an event of that type occurred.

The problem is that evolutionary theory is not necessarily a complete and deterministic equation. Saying that some feature of an organism has appeared through biological evolution is one thing; saying that it was certain or highly probable to appear is another. The precise shape of our noses, for instance, is not due to an evolutionary advantage that these specific shapes procure. There is an element of noise in evolutionary processes that leads to the creation of some features out of pure randomness, and if the organisms that have those features survive, the features will simply be passed on to the next generation. This does not mean that there is no evolutionary advantage to having a nose – obviously it plays a role in breathing. The same goes for the mind: there may be no specific reason why the mind has the characteristics that it has; it might just be that it appeared with those characteristics and got passed on to the next generations.

I am making that point simply to highlight the idea that not every feature of our biology needs to be explained as an obligatory and deterministic consequence of all possible evolutionary histories; some things can be the way they are simply because of our particular evolutionary history. There is something impossible in the kind of predictive power that Nagel demands from evolutionary theory: he would want it to explain why the mind had to evolve. I like to transfer the question to physical features to illustrate its flaws. Think about the fact that a lot of animals on the planet have four legs. Is there an evolutionary advantage to having legs? Yes. Did it have to be four legs? Not necessarily. Yet no scientist, nor Nagel I suppose, would claim that quadrupedalism is a non-physical feature that evolutionary biology can't explain. Nevertheless, he seems to be making exactly that claim about the mind.

Besides that, the very characteristics of what we call consciousness do not seem completely independent of our biological needs – in fact, they seem highly in tune with our survival. Ever noticed how pain hurts very badly and how positive feelings make you want to repeat the experience? It does seem that the mind has characteristics that make us better adapted, and in that idea might lie the evolutionary explanation that Nagel is looking for. Nagel does recognize that some of those characteristics could have an evolutionary explanation, and I'll come back to that later.

For now, although I disagree that there has to be a reason for the appearance of consciousness, we can still play the game of hypotheses, and there may indeed be multiple reasons. "Why [is] the appearance of conscious organisms, and not merely behaviorally complex organisms, [...] likely?", Nagel asks. Here are some possibilities.

First, to be efficient and to act, our brain needs to represent the world and update its model of reality. It is this internal representation of the world that allows us to close our eyes and still reach for objects. It is also thanks to this representation that when we hear a voice behind us, we can imagine a visual representation of the speaker's position, and even their identity if we know them. We have inside our head a very detailed and complex model of the world, and when we generate actions, we sometimes rely solely on that model rather than on inputs from the outside world (as in the reaching-in-the-dark example). It is not at all unlikely that part of our perceptual subjective experience is simply a good way that biology has found to make us negotiate our actions within that model of the world. By creating feelings of perception both when we see things for real and when we imagine them, the brain might simply be using subjective experience as a common language linking real-world inputs and imagined perceptions. There is already a great deal of evidence that the parts of our brain activated when receiving inputs from the world are also activated during mental imagery, which makes this hypothesis at least plausible [3,4]. The same is true for motor imagery, and we know that brain damage impairs such skills, suggesting that they have a physical and neurobiological basis [5].

There is also the possibility that subjective experiences evolved in response to social context. Perhaps someone really in pain, really happy, or really annoyed is more convincing than someone who would simply generate behaviors without actually experiencing the feelings. Robert Trivers pointed out that self-deception might have been favored by evolution because it helps us better deceive others [6]. Can't the same be said of feelings like pain? Is it really unlikely that we evolved a feeling of pain so we could send more convincing "stop" signals to others? Is it really unlikely that the inner conviction that we are hungry is a drive to gather more food, or to incite more food sharing from our conspecifics and parents?

None of these possibilities are proven scientific facts, but their simple plausibility renders the alternative view proposed by Nagel unnecessary, at least until they receive more scientific attention.

I have not yet mentioned, however, what constitutes the biggest problem with Mind and Cosmos. It is a problem that affects many other works in the philosophy of mind: the supposedly unique properties of the mind that science is portrayed as having so much difficulty explaining are often poorly defined, and when one tries to pin down what they really mean, one realizes that they may well have homologies with brain properties that have already been identified. After setting aside all those subjective experiences that may have evolutionary significance, like pain and happiness, which he recognizes might result from our evolutionary history, Nagel claims that other properties of the mind are problematic for materialism, including our capacities for reasoning (mathematics, for instance), logic, and ethics.

Let’s consider abstract reasoning. Our ability to find truths, in short, would be unlikely to be explained by a materialist evolutionary theory, because the truths identified in modern physics and mathematics are too complex to have been produced by a brain that evolved under prehistoric conditions. Nagel writes:

This story depends heavily on the supposition of a biological origin of the capacity for nonperceptual representation through language, resulting in the ability to grasp logically complex abstract structures. In view of the mathematical sophistication of modern physical theories, it seems highly unlikely; but perhaps the claim could be defended.

There are many problems with this equation linking the sophistication of the great abstract human creations to the impossibility or unlikeliness that these creations could emerge from a biological brain. First, as argued previously, there does not need to be an evolutionary explanation for everything the brain does. The fact that it can do many different things might, however, reflect the dynamic environments in which we have evolved. In simple words, maybe evolution programmed into the brain the ability to do many things, rather than hard-coding specific behaviors into it. Secondly, the brain of any individual does not have any particular access to "truths". For every mathematician who develops a single great mathematical equation, there are at least a thousand cranks who believe they had an extraterrestrial encounter or who think the apocalypse is due next year. For every trained mathematician who ends up discovering something, there are hundreds of others who end up making errors and discovering nothing. It is through a social process, that of the scientific and academic system, that we identify those who produce the most useful equations and give them a job and the recognition they deserve. Looking back at it, there seems to be an element of randomness in that process of production and subsequent selection. The brain wasn't programmed by evolution to seek abstract truths; it was programmed to be curious, maybe, and to learn things, most likely. One does not need to invoke a non-physical aspect of the mind to explain how some people end up being right and their theories end up being selected as useful for mankind. For a more detailed view of how good ideas can spread through social networks, the reader might want to consult Daniel Dennett's Darwin's Dangerous Idea: Evolution and the Meanings of Life.

There are, on top of that, many reasons why the brain might have evolved some form of logical reasoning, even in a prehistoric context. We know that even birds and monkeys are capable of some degree of numerical cognition [7,8]. We know that the social environments humans have always lived in might favor skills such as attributing knowledge to specific individuals and reasoning about fictive scenarios in order to deal with others [9]. The dynamic evolution and learning of interacting individuals have been widely discussed and are a subject of current research [10]. There is in fact no reason to believe that our mental faculties cannot be explained by an evolutionary process – a biological and physical one.

There are other claims about value and intentions being similarly problematic characteristics, but again, most of these questions are already being studied in current brain research. I might cover those claims in more detail in a future post.

Finally, another argument Nagel brings up is the idea that rationality cannot be divided into small components, the way a computer can be separated into miniature transistors. This irreducibility of the mind, and of rationality in particular, he argues, constitutes a big problem for the idea that it may be entirely explainable by networks of neurons. But there are other things in biology that are not reducible and that do not seem to require the kind of explanation Nagel wants to develop for the mind. If you slice a heart into small dice, there is a point at which it is not a heart anymore; it's just a bunch of cells that have no function because their normal biological structure has been destroyed. Yet I hear no one claiming that the heart is a non-physical entity that cannot be explained by the materialistic version of evolutionary theory.

References

1. Thomas Nagel (2012) Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False.

2. Musholt K. (2013) A Flawed Challenge Worth Pondering. Science 339:1277.

3. Zvyagintsev M, Clemens B, Chechko N, Mathiak KA, Sack AT, Mathiak K. (2013) Brain networks underlying mental imagery of auditory and visual information. European journal of neuroscience. doi: 10.1111/ejn.12140.

4. Zapparoli L, Invernizzi P, Gandola M, Verardi M, Berlingeri M, Sberna M, De Santis A, Zerbi A, Banfi G, Bottini G, Paulesu E. (2013) Mental images across the adult lifespan: a behavioural and fMRI investigation of motor execution and motor imagery. Experimental Brain Research 224:519-40.

5. Daprati E, Nico D, Duval S, Lacquaniti F. (2010) Different motor imagery modes following brain damage. Cortex 46:1016-30.

6. von Hippel W, Trivers R. (2010) The evolution and psychology of self-deception. Behavioral and Brain Sciences 34:1-16.

7. Pepperberg IM. (2012) Further evidence for addition and numerical competence by a Grey parrot (Psittacus erithacus). Animal Cognition 15:711-7.

8. Nieder A. (2012) Supramodal numerosity selectivity of neurons in primate prefrontal and posterior parietal cortices. Proceedings of the National Academy of Sciences of the United States of America 109:11860-11865.

9. Cosmides L. (1989) The logic of social exchange: has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition 31:187-276.

10. Gintis H. (2009) The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences.
