6. Non-Reductive Physicalism
6.1 Multiple Realisability
Hilary Putnam has formulated an influential argument against the mind-brain identity thesis. If pain is identical to a physiological state, it must be identical to a specific physiological state. The reductive physicalist is thus committed to the consequence that all creatures that can feel pain have that aspect of their physiology in common. This is a very strong empirical claim. If pain is identical to c-fibres in my brain firing, for instance, then any creature capable of being in pain has c-fibres capable of firing in the appropriate circumstances. A human, a dog, a crocodile, an octopus: they all must have an aspect of their physiology in common, if they are all capable of feeling pain, and who would deny that they are? This holds even for creatures we have not yet encountered, as yet undiscovered species on Earth or extra-terrestrials. We may not yet be able to identify the aspect of physiology that all creatures capable of being in pain must share, but the identity theory guarantees that there is one to be found. Conversely, we'll never discover a creature that can feel pain but lacks that physiological feature. The identity theory imposes a priori constraints on empirical discoveries.
The restrictions reductive physicalism imposes on the nature of mental states are even more striking for more complex examples. Take my belief that there is an infinite number of prime numbers. According to the identity theory, it must be identical to some state of my brain, and everyone who has that belief must also have exactly this brain state. You can come to have that belief in different ways: actually going through each step of a proof (and there are different ones), glancing at a sketch of a proof, being told that there is an infinite number of prime numbers by a trustworthy source. All these processes must result in the same brain state. This generalises.
All persons who believe that cats are smarter than dogs, or that democracy is not a good form of government but no one has come up with anything better, must share the same brain state. This is not a refutation of reductive physicalism. It only shows it to be demanding almost beyond belief. Putnam's alternative proposal is that mental states are multiply realisable. Mental states are connected to behaviour. Creatures that are in pain typically behave in a certain way. Creatures with nervous systems that may not have any specific feature in common can nonetheless behave similarly. The neurological states connected to pain behaviour in humans, dogs, crocodiles and octopuses may be different. This does not mean that they are not all in the same mental state when they show similar behaviour. The same mental state may be realised in different neurological states in organisms of different species, so that creatures with different nervous systems can nonetheless be in the same mental states.
6.2 Functionalism
It is a very common phenomenon that different things can perform the same function. Mammal hearts and reptile hearts are different, but they are both hearts because they perform the same function in reptiles and mammals: to circulate blood. Anything that pumps blood through the arteries and veins of an organism is a heart, no matter what exactly its physiology is. Anything that does not do that is not a heart. The shape, materials and mechanisms of mousetraps vary a lot, and maybe the only thing they all have in common is that they trap mice, which is what makes them all mousetraps. Anything that is designed to trap mice is a mousetrap, no matter how it is built. Anything that does not trap mice isn't a mousetrap. Putnam applied a similar observation to mental states. Mental states serve a certain purpose and are connected to the behaviour of organisms. This was also an important aspect of Lewis's argument for the identity thesis.
We often identify mental states by their causal role: the way a mental state is caused in an organism and what it in turn causes is essential to what it is. Putnam uses this observation to argue for non-reductive physicalism. The idea behind the multiple realisability of mental states is that different physiological states can perform similar causal roles. This is the case across species, but may also hold within a species or even within one individual organism. Mental states have a certain function in the causal workings of an organism. A functional state of a creature is one that is characterised by typically causing certain reactions or behavioural outputs as a response to certain environmental conditions or sensory inputs. Functionalism in the philosophy of mind is the view that mental states are certain functional states of creatures. Organisms with very different physiologies can have the same mental states, as they can be in the same functional states. It is unlikely that the functional role of mental states is determined by behavioural and sensory causes and effects in isolation from other mental states. This is plausible already for comparatively simple cases like pain, but even more so for pleasure or beliefs. Other mental states are amongst the typical causes and effects of mental states. It may also be that there is no sharp boundary between the mental states, sensory inputs and behavioural outputs that are relevant to the functional role of a specific mental state and those that are not. It may be that only its place in the whole causal network of mental states and their connections to the environment determines the causal role, and thus the nature, of an individual mental state. Functionalism usually embraces holism. The whole description of the workings of an organism specifies the causal role of each individual mental state. Functionalism is a variety of physicalism, if it is required that mental states are realised by physical states.
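The core functionalist idea, that one causal role can have many different realisers, can be pictured in a toy programming sketch. Everything in it is a made-up illustration, not a model of any real organism: two 'organisms' with entirely different internal mechanisms exhibit the same input-output profile, and so, on the functionalist view, count as being in the same functional state.

```python
# Toy illustration of multiple realisability: two different "realisers"
# of the same functional role (hypothetical example for illustration only).

class CFibreOrganism:
    """Realises the pain role via one internal mechanism."""
    def __init__(self):
        self.c_fibres_firing = False  # internal physical state

    def stimulus(self, damage):
        self.c_fibres_firing = damage  # tissue damage fires c-fibres

    def behaviour(self):
        # The functional role: damage in, avoidance behaviour out.
        return "withdraw" if self.c_fibres_firing else "carry on"

class HydraulicOrganism:
    """Realises the same role via a completely different mechanism."""
    def __init__(self):
        self.pressure = 0  # a different internal physical state

    def stimulus(self, damage):
        self.pressure = 100 if damage else 0

    def behaviour(self):
        return "withdraw" if self.pressure > 50 else "carry on"

# Same input-output profile, different physical realisation:
for organism in (CFibreOrganism(), HydraulicOrganism()):
    organism.stimulus(damage=True)
    print(organism.behaviour())  # both print "withdraw"
```

The point of the sketch is only that the functional specification (damage in, withdrawal out) is silent on the internals, just as the functionalist claims the concept of pain is.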
Functional states are realised in physical states, because certain physical states fit the specification of the causal roles of the functional states. This is a non-reductive physicalism, as the mental properties so understood are real properties in their own right, distinct from and irreducible to the physical properties they are realised in. It is token physicalism coupled with property dualism. As they are tied to the behaviour of an organism in its physical environment, we should expect that mental properties supervene on those physical ones. Physicalism is not essential to functionalism. Functionalism is consistent with substance dualism: nothing in what has been said about functional roles prevents it from being the case that your soul performs them, as long as your soul can stand in causal relations to the physical world. A physicalist functionalist can grant that, although in fact all mental properties are realised by physical properties, it is nonetheless possible that they are realised by non-physical properties. A functionalist could accept that spiritual beings like ghosts are logically possible or conceivable and could have beliefs and emotions, as long as they react in the appropriate ways to their environment, even though their mental properties would not be realised in physical properties. Other considerations such as the closure of physics may lead you to conclude that all functional states are physical states, but this is what they remain: considerations external to the core tenet of functionalism. The picture functionalism proposes of the mental is mechanistic, but nothing in something’s being a mechanism excludes it from being non-physical. Classical versions of reductive physicalism and substance dualism share the assumption that your mental states are entirely internal to you, be it that they are states of your brain or of your soul. 
The functionalist view of mentality is radically different: your mental states depend essentially on events external to you, via the causal relations you stand in to your environment. Just looking into a creature's brain or soul need not enable us to determine what mental state it is in. Consequently, the environment a creature inhabits and the way it can engage with it limit the mental states it can have, not just contingently but essentially. You could not, for instance, implant the thought that the circle cannot be squared with ruler and compass into the mind of an early human. You would also have to implant many other thoughts and connect them causally to each other and to the environment in the requisite way. Functionalism favours a holistic approach to the identity of mental states. The nature of an individual mental state is not determined in isolation, but depends on the workings of the whole functional system it forms part of. That raises a question. Any two creatures, no matter how similar, are different. I have different beliefs, desires, hopes and fears from yours. We do not react in exactly the same way to the same external stimulus. Strictly speaking, then, if the content of an individual mental state depends on the whole functional system it is part of, and my functional system and yours are different, then the contents of our mental states must be different too, no matter how similar they may appear to be. We may be able to explain this away by pointing out that functionalism is an account of how types of mental states are individuated and that, as we are members of the same species, our reactions are sufficiently similar and of the same kinds. The question is whether it is plausible that this claim can be extended to apply across species, where behaviour is significantly different.
The motivation for the functionalist approach was to give an account of the nature of mental states that avoids a criterion of identity for them so strict that organisms with different physiologies cannot have the same mental states. The holism of the approach, however, appears to push the position towards the same problematic conclusion.
6.3 The Computational Theory of the Mind
Functionalism is a mechanistic view of the mental. Mental states are certain internal states of an organism forming a causal network which as a whole is causally connected to sensory inputs and behavioural outputs. What goes on between the input and the output is a mechanism that carries out operations in response to the inputs to produce the outputs. The computational theory of the mind, building on Alan Turing's work, aims to give a more detailed account of what the mind does by characterising it as carrying out calculations or computations. Turing's work formed the basis of computer science. He proposed a precise definition of calculation or computation in terms of abstract machines now named after him. A Turing Machine can read information, operate on it and thereby manipulate it. We can think of the machine's means of receiving information as a tape partitioned into squares. Information comes in a unified form, e.g. whether a square is empty or has a dot in it. The machine can move to the left and right along the tape and detect whether a dot is in a square or not, delete dots and write dots. The tape is arbitrarily long, but initially only finitely many (if any) squares have a dot in them. More precisely, a Turing Machine can do the following five things:
1. if there is a dot in a square, erase it,
2. if there is no dot in a square, write one,
3. move one square to the left,
4. move one square to the right,
5. halt.
3, 4 and 5 are carried out conditionally on whether there is a dot in a square, i.e.
prefixed by either 'If there is a dot in the square, then' or 'If there is no dot in the square, then', e.g. if the square the machine is at contains a dot, move one square to the left. That's all. This is only a sketch of Turing Machines. They can carry out a large number of mathematical operations, in particular addition and multiplication. We've represented their input as dots, but we can adapt the description to other inputs, as long as they are sufficiently uniform, for instance electrical impulses. Turing Machines can do a lot of things. Your computer is effectively a Turing Machine. Comparing what goes on in mental creatures between sensory input and behavioural output with some kind of calculation or computation neatly captures the thought that the mechanisms in the mind may be very different from creature to creature, even though they produce similar behavioural responses to similar environmental conditions. Different methods of calculation may produce the same result for the same input, and calculations typically do not produce an output immediately after receiving an input, but may produce various intermediate stages in the process of calculation. There are different ways of calculating additions and multiplications, but they give the same results for the same summands and factors. Turing was interested in artificial intelligence. Is it possible to build a machine that responds intelligently to questions or can convince a human interlocutor that it is itself human? This is called the Turing Test. Turing was optimistic that intelligent machines would be built in the near future relative to when he was writing. He was proved wrong about the time frame, but there seems nothing in principle impossible about an intelligent machine in Turing's sense. If there were such a machine, we can ask a further question: would it also possess a mind?
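Before turning to that question, the five primitive operations listed above can be made concrete in a small simulator sketch. The encoding below (a table mapping a machine state and the presence or absence of a dot to an action and a next state) is one of many equivalent ways to write this down, chosen here for brevity; it is illustrative, not Turing's own notation.

```python
# A minimal Turing Machine sketch: the tape holds dots (True) or blanks
# (False); a program maps (state, dot_present) to one of the five
# primitive actions ('erase', 'write', 'left', 'right', 'halt') and a
# next state. Hypothetical notation, for illustration only.
from collections import defaultdict

def run(program, tape, state="start", position=0, max_steps=1000):
    cells = defaultdict(bool, enumerate(tape))  # arbitrarily long tape
    for _ in range(max_steps):
        action, state = program[(state, cells[position])]
        if action == "erase":
            cells[position] = False
        elif action == "write":
            cells[position] = True
        elif action == "left":
            position -= 1
        elif action == "right":
            position += 1
        elif action == "halt":
            break
    # Return the positions of the marked squares, in order.
    return [square for square, dot in sorted(cells.items()) if dot]

# Example: append one dot to a block of dots (the successor function
# on numbers written in unary).
successor = {
    ("start", True): ("right", "start"),  # scan right over the dots
    ("start", False): ("write", "done"),  # first blank: write a dot
    ("done", True): ("halt", "done"),
}

print(run(successor, [True, True, True]))  # -> [0, 1, 2, 3]: four dots
```

Even this tiny program illustrates the conditional form of instructions 3 to 5: each table entry is read 'if the square contains a dot (or not), do this and go to that state'.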
The identity theory must answer in the negative: no matter how well a machine simulates human behaviour, it cannot have a mind, or even just thoughts or beliefs. If mental states are brain states, only things with a brain can have a mind. The computational theory of the mind can answer in the affirmative, at least if the machine is complex enough, but it is debatable whether it is committed to that answer. Many arguments and thought experiments have been put forward that aim to show that there must be more to human cognition than what can be carried out by a computer programme. Suppose someone, say John Searle, is given a job in a large, old-fashioned archive with long corridors of filing cabinets. Searle receives cylindrical carriers containing rolled-up sheets of paper through pneumatic messaging tubes. His job is to send back files from the cabinets. The sheets are covered in symbols, and he has been given instructions that tell him where to find the files to send back in response to each message he receives. Searle knows nothing else about those symbols except that they guide him along the filing cabinets with the help of his instructions. Unbeknownst to Searle, the symbols are in fact Chinese, the papers he receives are communications from Chinese speakers, and the files he sends back are adequate responses. The speakers think they are conversing with another Chinese speaker. Searle argues that his actions could be carried out by a computer, but nonetheless, this does not result in any understanding of Chinese. What a Turing Machine can do very much depends on the inputs it can operate on and the outputs it can produce. You can feed your computer a recipe for baking a cake, but unless you extend it in some way so that it can handle ingredients, presenting it with eggs, flour, milk, sugar and chocolate will not produce any cake.
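Searle's rule-following clerk can be sketched in a few lines of code. The two-entry 'rule book' below is entirely made up for illustration (a programme that could pass for a Chinese speaker would need vastly more than a lookup table); the point it makes is only that the symbols are manipulated as opaque tokens, with their meanings represented nowhere in the procedure.

```python
# A toy "rule book": incoming symbol strings are mapped to outgoing
# ones. Nothing in the program represents what the symbols mean.
# (Hypothetical two-entry table, for illustration only.)
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",      # "How's the weather?" -> "It's lovely."
}

def clerk(message):
    """Follow the instructions: look up the message, send back the file."""
    return RULE_BOOK.get(message, "对不起，我不明白。")  # default reply

print(clerk("你好吗？"))  # an adequate reply, with no understanding involved
```

Whether running such a procedure, however elaborate, could ever amount to understanding is exactly what Searle denies and his critics dispute.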
One response to Searle’s argument is that although there may be no understanding of Chinese going on in the scenario as Searle sets it up, there would be if it allowed for a richer way of interacting with the environment than just by responding to messages received. One option would be that the Chinese Room is connected to a robot that manoeuvres around Chinese speakers. Although Searle would then still not understand Chinese, the robot would. Another argument that Turing Machines cannot replicate all aspects of human cognition aims at the very heart of what they can do, no matter what their input or output. Computation has limits. Kurt Gödel showed that if Peano arithmetic is consistent, then there are true arithmetical sentences that cannot be proved within it. The details are intricate, but a simple version goes like this. Consider the sentence G that says of itself that it is not provable in Peano arithmetic, i.e. G is the sentence ‘G is not provable in Peano arithmetic’. Suppose G is provable in Peano arithmetic. Then ‘G is not provable in Peano arithmetic’ is false. So if Peano arithmetic is consistent, that is, if only true sentences can be proved in it, then G is not provable in Peano arithmetic. But that is of course what G says. So G is true, but not provable in Peano arithmetic. What is crucial to Gödel’s proof is that G can be formalised within Peano arithmetic. This involves showing that it is possible to represent the predicate ‘It is provable in Peano arithmetic that’ and to express the self-referentiality of G within Peano arithmetic. This is far more complicated than this simple sketch suggests. In a sense to be made precise, Turing Machines cannot compute more than what is provable in Peano arithmetic, another result that requires proof for the argument to carry any weight. Thus, so the argument goes, as G is not provable in Peano arithmetic, a Turing Machine cannot establish that it is true.
We, however, can see that it is true, so there is something human minds can do that Turing Machines can’t do. If Turing Machines capture what it is for something to be a mechanism, then the conclusion is that the functioning of the mind cannot be explained entirely in mechanistic terms. There is little consensus on what, if anything, this argument shows. Gödel himself, although sympathetic to it, was careful to stress that before any firm conclusions can be drawn, there would have to be significant advances in the philosophies of mind and mathematics so that the argument can be treated with the rigour it requires. We are certainly far from that.
6.4 Mental Causation Again: The Exclusion Argument
The problem of mental causation once more rears its ugly head. According to functionalism, although mental properties are realised in physical properties, they are not identical to them. They are higher-order properties of the lower-order physical properties of organisms. How can they cause physical events? If every physical event has a sufficient physical cause, then the lower-order physical properties cause the events following a creature’s being in a certain mental state. The physical realisers are the causes, not the mental states, which are different from them. The causal efficacy of mental states is excluded by the causal efficacy of their physical realisers. Mental states are not causally efficacious. To avoid epiphenomenalism, the non-reductive physicalist of the functionalist kind has to reject the closure of physics or accept overdetermination of physical events by physical as well as mental causes. The latter is unsatisfactory. It may not be coherent with physics, and in any case, if the mental cause of an event is not necessary to bring it about, as the physical cause suffices, would such a phenomenon deserve to be called ‘mental causation’? Everything would be just the same if the ‘mental cause’ were absent.
Rejecting the closure of physics allows downward causation: higher-level functional properties can have causal effects on lower-level physical properties. Physics deals only with the latter, not the former. Physics can therefore neither explain nor predict everything that happens even just in the physical realm, as the higher-level properties may cause physical events. This raises a question: how much of a physicalism is non-reductive physicalism if it rejects the closure of physics?
Reading
Bennett, K. ‘Why the Exclusion Problem Seems Intractable and How, Just Maybe, to Tract It’ Noûs 37 (2003): 471-497
Block, N. ‘Troubles with Functionalism’ Minnesota Studies in the Philosophy of Science 9 (1978): 261-325
Funkhouser, E. ‘Multiple Realizability’ Philosophy Compass 2 (2007): 303-315
Haugeland, J. ‘What is Mind Design?’ in Haugeland, J. (ed.) Mind Design II (Cambridge, MA: MIT Press, 2000)
Kim, J. ‘Mechanism, Purpose, and Explanatory Exclusion’ Philosophical Perspectives 3 (1989): 77-108
Kim, J. ‘Multiple Realization and the Metaphysics of Reduction’ Philosophy and Phenomenological Research 52 (1992): 1-26
Putnam, H. ‘The Nature of Mental States’ in Mind, Language, and Reality (Cambridge: Cambridge University Press, 1975)
Preston, J. and M. Bishop (eds.) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence (Oxford: Oxford University Press, 2002)
Searle, J. ‘Minds, Brains and Programs’ Behavioral and Brain Sciences 3 (1980): 417-457
Further Reading
Benacerraf, P. ‘God, the Devil and Gödel’ The Monist 51 (1967): 9-33
Gödel, K. ‘Some Basic Theorems on the Foundations of Mathematics and their Implications’ in Feferman, S. et al. (eds.) Collected Works, Vol. III: Unpublished Essays and Lectures (Oxford: Oxford University Press, 1995)
Lucas, J. ‘Minds, Machines and Gödel’ Philosophy 36 (1961): 112-127
McLaughlin, B. ‘Is Role-Functionalism Committed to Epiphenomenalism?’ Journal of Consciousness Studies 13 (2006): 39-66
Piccinini, G. ‘Functionalism, Computationalism and Mental States’ Studies in History and Philosophy of Science 35 (2004): 811-833
Turing, A. ‘Computing Machinery and Intelligence’ Mind 59 (1950): 433-460