Why Searle’s Biological Naturalism Is Chauvinist Functionalism

John Searle is a well-known critic of the Computational Theory of Mind (CTM), or Strong A.I., thanks to his Chinese Room thought experiment. Naturally, he is also a critic of Functionalism, given CTM’s popularity among functionalists. At one point, Searle expressed his contempt for Functionalism by saying: “If you are tempted to functionalism, I believe you do not need refutation, you need help” (Searle, 1992). Clearly, Searle is hostile towards both CTM and Functionalism. Moreover, Searle eschews dualism of any kind, from Cartesian Dualism to Property Dualism. So, if Searle is neither a property dualist nor a functionalist, then what is his position?

Searle proposes what he considers to be a genuine alternative to both Functionalism and Property Dualism. He calls it Biological Naturalism. What exactly is Biological Naturalism? To put it crudely, Biological Naturalism states that mental states are the right kind of causal powers, powers that can be realized in an appropriate biological organism. This shouldn’t be confused with Type Materialism, which states that each type of mental state is identical to some type of neurological state. Unlike Type Materialism, Searle allows that, in principle, non-carbon life forms could possess an analog to a complex nervous system with mental states.

Searle often compares mental states to phases of matter such as liquidity and solidity. The point of the analogy is that just as liquidity and solidity are emergent properties of water, our mental states are emergent biological properties of matter. In other words, mental states cannot be reduced to individual neurons; rather, they emerge as a complex pattern from the overall interaction among them. In this sense, mental states are not ontologically reducible to the biological organism, but they are causally reducible insofar as they are causal byproducts of our nervous system.

However, it is unclear to me how this is supposed to be a genuine alternative to Functionalism. One argument was raised by Georges Rey, who points out that the idea that mental states are the “right causal powers” of the brain can be described in a Ramsified sentence congenial to David Lewis’ Functionalist theory. In fact, David Lewis uses Ramsified sentences to formulate his Functionalist theory (a.k.a. Analytic Functionalism). Moreover, David Lewis would agree with John Searle that mental states can be realized in biological organisms, but not necessarily in computational systems. Similarly, D.M. Armstrong, who proposes the Causal Theory of Mind, would agree that mental states are causal states of the central nervous system.

What David Lewis and D.M. Armstrong have in common is that they would both be considered Chauvinist Functionalists. The term was coined by Ned Block in “Troubles with Functionalism”. In this context, a strict chauvinist is a functionalist who believes that mental states are causal roles of biological organisms, but does not believe that such states can be realized by anything else. A liberal is a functionalist who believes that mental states are causal roles or realizers that can exist in any system, as long as that system is loosely similar to the causal input-output structure of our mind. So, a chauvinist emphasizes a fine-grained description of mental states such that it applies exclusively to biological organisms, but not to the nation of China.

So, if Lewis and Armstrong are chauvinists, would Searle’s Biological Naturalism count as another version of chauvinist functionalism? After all, Searle thinks that mental states are the right causal powers of the central nervous system, but this phrase is ambiguous. It is ambiguous enough to be open to several interpretations, including a chauvinist functionalist one: the “right causal powers” are causal states that exist exclusively among biological organisms with a central nervous system of some sort.

Searle might argue that there is one huge difference: mental states are emergent biological properties causally produced by our central nervous system. The key term here is “emergent”, because if mental states are emergent then they can’t be reduced to individual neurons; moreover, they can’t be reduced to the input-output structure among neurons. Perhaps this is what Searle has in mind. However, the problem is the term “emergent” itself. If I understand the term correctly, it is often used to refer to the phenomenon of self-organization.

For example, a snowflake has an emergent crystallized structure that cannot be reduced to its H2O molecules. A functionalist might ask, “why can’t a sophisticated internal input-output structure also be an emergent causal pattern among neurons?” After all, the mind could just be an emergent pattern consisting of an input-output structure that cannot be exhaustively described in terms of individual neurons. Let’s use an analogy: our human biology cannot be reduced to the number of our genes; rather, it is more or less appropriately described in terms of how genes interact with each other, including at the level of epigenetics. Likewise, our mind cannot be reduced to the number of neurons (approximately 86 billion), but must also be described in terms of the causal interaction among them. This causal interaction among neurons can be described as a very complex input-output structure.

Searle might argue that such causal powers cannot be explained in terms of computation or algorithms, so his theory is not an instance of functionalism. This may not be a charitable interpretation of Searle, but it is plausible given that in the past he has conflated computationalism and functionalism as if they were synonymous. However, being a functionalist does not entail that one is a computationalist (see Piccinini, 2009, who explains the difference, and David Chalmers, 1992, who briefly points out that they are not synonymous). A functionalist who is not a computationalist may argue that while mental states can be characterized in terms of their causal roles in relation to an entire causal system (the mind), those roles are not computational.

So, again, why isn’t Searle’s Biological Naturalism just another chauvinist version of Functionalism? After all, he hasn’t provided an explicit non-functionalist interpretation of “right causal powers” that is incompatible with every form of functionalism. As it stands, I don’t think Searle has provided a genuine alternative to Functionalism; at best, Searle is against liberal functionalism or CTM. I could be wrong, since many philosophers like David Chalmers and Edward Feser believe that Searle is a property dualist. But I think this points to one thing: Searle hasn’t provided a clear and articulate alternative philosophical account. His position is ambiguous to the point that it is open to interpretation by either the functionalist or the dualist camp. Even though Searle has explicitly rejected both functionalism and property dualism, he hasn’t provided a theory that is genuinely different from either.

8 thoughts on “Why Searle’s Biological Naturalism Is Chauvinist Functionalism”

  1. JW Gray

    What is your definition of emergence and your definition of functionalism?

Searle says the mind is greater than the sum of its parts. He also seems to think the mind is physical, but not physical in the sense of things like atoms and so forth.

    Reply
    1. philonous13 Post author

      James,

      “What is your definition of emergence and your definition of functionalism?”

Defining functionalism is very difficult because there are many different versions of it. I think functionalism is more or less an umbrella term for theories that try to understand the mind as a system in which mental states play a causal or functional role understood in relation to other parts of that system. As for whether mental states can be realized in systems besides a nervous system, a huge portion of functionalists do think they can (i.e. multiple realizability), but there are a few who think they can only be realized in the brain.

As for emergence, I understand it as a property of the whole that cannot be reduced to its mere constituent parts. Usually, the property of the whole is due to the interaction or complex interrelation among the parts (i.e. the geometric shape of a snowflake is due to how its molecules bond together in a certain way). I hear that there are different definitions of emergence, but it’s unclear to me why they have to be incompatible with functionalism. After all, a generic functionalist understands a mental state in terms of its causal relation to other mental/physiological states, rather than as an isolated state. This seems compatible with emergence.

      Reply
      1. JW Gray

        I think Searle understands functionalism as a type of identity theory saying x or y or z is identical to something else. That itself could be seen as a type of reduction that I don’t think Searle would agree with. He thinks the mental is a type of physical thing on its own without needing to be identical with any other physical thing or function.

      2. philonous13 Post author

        “I think Searle understands functionalism as a type of identity theory saying x or y or z is identical to something else. That itself could be seen as a type of reduction that I don’t think Searle would agree with. He thinks the mental is a type of physical thing on its own without needing to be identical with any other physical thing or function.”

Ok, in what sense is the mind a “physical thing”? I suspect what you mean is that Searle thinks the mind is an irreducible “physical thing” that is nonetheless causally dependent on the central nervous system. However, it’s unclear what makes this irreducible “physical thing” physical, and that’s where the accusation of property dualism comes in. If the mind is irreducible, then it cannot be described in terms of anything else we already recognize as physical (i.e. function, computation, neurons, etc.). The physical predicates and vocabularies that apply to physics, neurophysiology, and the computational properties of our nervous system just don’t extend to the mind, given that it’s irreducible. And if these physical predicates can’t be applied to the mind, then it’s unclear what makes the mind, as an irreducible “physical thing”, physical at all. If Searle can’t explain how it’s physical, then his position, Biological Naturalism, risks collapsing into property dualism.

I think another alternative is to admit that the mind, which Searle calls the “right causal power” of the brain, is a complex set of functional roles interrelated to one another as a causal system of the brain. In other words, he could just admit that he’s a chauvinist functionalist. There is one piece of evidence supporting this alternative: his analogy between photosynthesis and the mind. He points out that photosynthesis is a biological process that plants use to convert light into energy; likewise, he thinks that the mind is some “right causal power” of the brain. However, this analogy is consistent with chauvinist functionalism. In fact, D.M. Armstrong’s Causal Theory of Mind (a.k.a. Central State Materialism) states that the mind just is a set of physical processes of the central nervous system. If Searle balks at this position, then it just shows how his analogy isn’t enough to establish that his position is incompatible with chauvinist functionalism.

  2. JW Gray

    I took Searle’s class and he said that a robot might be able to have a mind (as far as he knows). He has no idea how it could happen, but he doesn’t reject every type of multiple realizability. He might reject some ideas of multiple realizability and not others.

    Reply
    1. philonous13 Post author

      James,

      “I took Searle’s class and he said that a robot might be able to have a mind. He has no idea how it could happen.”

This is interesting, because in many of his works he explicitly denies that a robot (understood as an autonomous computational system that acts according to algorithms) can have a mind. That was the main reason he developed the Chinese Room thought experiment: to show that symbol manipulation cannot engender understanding.

      ” but he doesn’t reject every type of multiple realizability. He might reject some ideas of multiple realizability and not others.”

Yes, I’m aware of that. He is open to the possibility that mental states can be realized in non-carbon life forms (which is what I stated in my blog post). However, Searle doesn’t think that openness extends to A.I. robots. Consequently, he narrows the spectrum of things that could have minds. In this sense, he accepts a restricted version of multiple realizability, but that is compatible with a chauvinist version of functionalism. And if his views are compatible with a chauvinist version of functionalism, then I don’t see how Biological Naturalism is a genuine alternative to it.

      Reply
      1. JW Gray

        He thinks of AI/computation from a certain perspective — really as something conceptual and non-physical. It is the 1s and 0s used in programs. The actual physical processes we call computers are meant to use programs, but there is something physical going on rather than conceptual. He thinks the mind must be caused by actual physical processes.

A robot would not have a mind (according to Searle) from 1s-and-0s (conceptual/computational) types of things. It would be because of actual physical processes. But we could theoretically find a way to bridge the gap. There could be something we intend to be computational plus something else to make a mind function a certain way.

      2. philonous13 Post author

        “He thinks of AI/computation from a certain perspective — really as something conceptual and non-physical. It is the 1s and 0s used in programs. The actual physical processes we call computers are meant to use programs, but there is something physical going on rather than conceptual. He thinks the mind must be caused by actual physical processes.”

I think you are referring to what’s known as “digital computation” (it computes over discrete values like 1s and 0s). It’s true that computation can be thought of in a very abstract and formal manner without any regard to physical processes. However, this is an idealization (i.e. like thinking about the laws of motion without considering friction). In reality, computation has to be implemented in order for it to exist in the real world, and in order to be implemented the computational process has to be isomorphic to a physical process (for example, a circuit that can settle into two possible states is isomorphic to a disjunctive (OR) logic gate). Proponents of the Computational Theory of Mind think that the mind is a computational process isomorphic to the physical processes of the nervous system. Even if the mind isn’t a digital computation, there are other forms of computation such as analog computation (which computes over continuous, rather than discrete, values). Piccinini is known for offering an alternative account of computation that doesn’t rely on classical digital computation.
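To make the isomorphism point concrete, here is a minimal sketch, my own illustration rather than anything from Searle or this thread, using a hypothetical two-input circuit with made-up voltage levels. The idea is that the circuit implements an OR gate just in case there is a mapping from physical states to computational states under which the physical transitions mirror the formal ones:

```python
# A toy illustration (hypothetical voltages, not a real device) of implementation
# as isomorphism: a physical circuit implements a computation when a mapping from
# physical states to computational states makes the two transition structures agree.

HIGH, LOW = 5.0, 0.0  # hypothetical voltage levels for the imagined circuit

def physical_circuit(v1: float, v2: float) -> float:
    """Toy physical dynamics: the output voltage goes high if either input is high."""
    return HIGH if (v1 > 2.5 or v2 > 2.5) else LOW

def interpret(voltage: float) -> bool:
    """The mapping from physical states (voltages) to computational states (truth values)."""
    return voltage > 2.5

def or_gate(a: bool, b: bool) -> bool:
    """The abstract, formal computation: a disjunctive (OR) gate."""
    return a or b

# The isomorphism check: interpreting the physical output always agrees with
# computing the formal OR of the interpreted inputs.
for v1 in (LOW, HIGH):
    for v2 in (LOW, HIGH):
        assert interpret(physical_circuit(v1, v2)) == or_gate(interpret(v1), interpret(v2))

print("Under this mapping, the physical circuit implements the OR gate.")
```

On this picture the computation is not something non-physical floating free of the hardware; it is a description of the physical process at a certain level of abstraction, which is how computationalists think of the brain as well.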

”A robot would not have a mind (according to Searle) from 1s-and-0s (conceptual/computational) types of things. It would be because of actual physical processes. But we could theoretically find a way to bridge the gap. There could be something we intend to be computational plus something else to make a mind function a certain way.”

It seems to me that Searle thinks that “actual physical processes” and “computation” are mutually exclusive. I don’t see why they should be. After all, he’s considering computation only as an idealization, but cognitive scientists (including cognitive neuroscientists) try to understand the mind in terms of (1) the computational processes of the mind and (2) how those processes are implemented by the brain. That’s understanding computation without too much idealization. I think the problem with Searle is that he’s ignoring how cognitive scientists (including neuroscientists) depend on computational models to understand how the mind works.
