Monthly Archives: February 2014

Self-Knowledge and Externalism

I just received the book I ordered from Amazon, Externalism and Self-Knowledge. The book is more or less a collection of articles by different philosophers writing on the same topic, or at least on topics relevant to it. The topic is relatively new to me, so I might miss a few nuances, subtleties, or points. Nonetheless, I know enough of the basics to write about them on my blog. Basically, the topic of Externalism and Self-Knowledge is a debate about whether or not Externalism is compatible with Self-Knowledge.

In the context of Philosophy of Mind, Externalism is a thesis about the meaning of mental contents. Specifically, Externalism is committed to the idea that the meaning of a mental content is at least partly determined by the environment, rather than by the internal features of the mind alone. So, the content of my thought “The cat is on the mat” is partly determined by the environmental fact that the cat is sitting on the mat. Notice, however, that Externalism is not merely stating that the contents of our thoughts are caused by the environment. That is a prima facie trivial truth that even an Internalist accepts (an Internalist believes that mental contents are individuated or determined by internal states). An Externalist is stating that the meaning of a mental content is also constituted by its (causal) relation to the environment. So, we can individuate the meaning of mental contents based on their relation to the environment.

There is a famous thought experiment by Hilary Putnam called “Twin Earth”. Suppose that alongside the original Earth there is an identical planet called “Twin Earth”. Twin Earth has people who are identical to us in every way (ignore the obvious question of how these people coincidentally happen to have the exact same evolutionary history as our own). There is, however, one difference: Twin Earth doesn’t have H2O, but rather something that looks like water and is composed of XYZ. XYZ is called “water” on Twin Earth. Both Oscar, an earthling, and his duplicate twin-Oscar have thoughts about water, but Oscar has thoughts about H2O, whereas twin-Oscar has thoughts about XYZ. Even though both share the same neurophysiology in extremely similar environments, they have different thoughts about water. Putnam concludes that “meanings just ain’t in the head”.

This thought experiment inspired many philosophers to believe in Externalism, but it leads them to a prima facie conundrum. Many of these philosophers also believe in a semi-traditional view of Self-Knowledge inspired by Descartes. They believe that self-knowledge enjoys a kind of private epistemic privilege that grants immediate a priori knowledge of one’s mental contents. However, knowing facts about one’s environment requires a posteriori knowledge of contingent facts. On the one hand, it appears that if we have a priori knowledge of our own mental contents, and those contents are constituted by their (causal) relation to the external world, then we have a priori knowledge of the external world. On the other hand, it could be that knowing my own thoughts is not a priori but a posteriori, since their meaning is determined by the environment. So, knowing our own thoughts might be just as a posteriori as experiencing any given environment. We have an apparent dilemma: either we have a priori knowledge of the external world, or we do not have a priori knowledge of our own conscious thoughts. The first horn sounds implausible, since it is already clear that we do not have a priori knowledge about the world. The second horn is disturbing for Externalists committed to a semi-Cartesian view of self-knowledge, since they believe that we have at least some a priori knowledge of our own thoughts.

There is one well-known thought experiment in this debate called the Switch case. Its purpose is to argue for the second horn of the dilemma. Suppose that, unbeknownst to Oscar, he is teleported to Twin Earth by technologically advanced Martians. Oscar is oblivious to the fact that the teleportation took place, since the place he was teleported to looks exactly like his home planet. When Oscar comes across the water-like XYZ, the content of his thought is indistinguishable from that of a thought about H2O. In fact, Oscar mistakenly thinks that the content of his thought is “H2O”. If one accepts that Oscar’s mental content is determined by his environment, then his mental content on Twin Earth should be about XYZ. However, Oscar mistakenly thinks that his content is not about XYZ, but about H2O. If Oscar’s self-knowledge about his content is mistaken, then it doesn’t seem like he has a priori knowledge of his content.

It is important to emphasize, however, that the thought experiment only clearly works if a priori knowledge or warrant is supposed to be infallible. Many philosophers who are committed to the existence of a priori knowledge allow that it can be defeated by evidence. So, it is not clear that the thought experiment shows that self-knowledge is not a priori. Still, one could argue that we often see self-knowledge as the area of knowledge that is more secure than any other, and the thought experiment seems to tarnish this picture of self-knowledge beyond redemption. Nonetheless, someone else could insist that in practice the traditional view of Self-Knowledge is still valid, since the Switch case never really happens. Peter Ludlow argues that the Switch case does happen when we travel to different linguistic communities that use some of the same words, but with different meanings or referents.

I personally find this an interesting topic so far, so I’m going to read the book in anticipation of developing that interest further.

The Extended Mind

One of the current debates in Philosophy of Mind is the Extended Mind debate. The debate begins with Andy Clark and David Chalmers’s article “The Extended Mind”, which argues that a cognitive process can extend beyond the brain into an external artifact. This extension happens when an external artifact aids a cognitive process. So, if writing down numbers and equations on a piece of paper aids my cognitive process of calculating, then the paper is also an extension of my calculation. This extension forms a coupled system in which a cognitive process is distributed between two entities. Clark and Chalmers give several examples, but one notable example is Otto’s notebook. Otto suffers from a mild case of Alzheimer’s, so he relies on his notebook to write down and recall the directions to his favorite museum. When Otto uses his notebook to recall the directions, he is able to go to his favorite museum. Inga, on the other hand, does not need to rely on a notebook, since her memory works just fine. Inga is able to go to the museum because she already remembers the directions. Clark and Chalmers argue that the cases of Inga and Otto are analogous insofar as both rely on something to store and retrieve information. They conclude that Otto and his notebook form a coupled system that functions as his memory, since it functions similarly to Inga’s internal memory.

There are many proponents of the Extended Mind, but relatively few critics. The most outspoken and well-known critics are Adams and Aizawa. In their article “The Bounds of Cognition”, Adams and Aizawa argue that Clark and Chalmers need to identify the mark of the cognitive before they can conclude anything about the Extended Mind. Adams and Aizawa think that the mark of the cognitive is non-derivative content and the causal mechanisms that process it. As far as Adams and Aizawa are concerned, Otto’s notebook does not constitute his memory, since it does not process non-derivative content. Instead, Otto’s notebook merely possesses derived content, which derives from Otto’s mind. Furthermore, they argue that it needs to be shown that the external process is continuous with the internal process, such that both process the same non-derivative content in a similar way. So, if an external artifact really is an extension of my cognitive process, it also needs to process non-derivative content in a way similar to how my brain does. Otherwise, it’s not really an extension.

I honestly side with Adams and Aizawa in this debate. I think they’re correct that, as far as we know, cognition remains a process of the brain. My main problem with the whole Extended Mind debate, as far as I’ve read it, is that there are too many careless analogies and examples. In my opinion, the Otto’s notebook argument is a poor one. The underlying process that stores and retrieves information in Otto’s head seems fundamentally different from Otto writing down information in his notebook. Clark and Chalmers used the Tetris thought experiment to argue that a Tetris computer is an extension of our mind. They argue that if we accept that mentally rotating a shape, with or without a brain implant, is a cognitive process, then why can’t we accept using a Tetris computer as a cognitive process? After all, both the brain implant and the Tetris computer are computational processes that we rely on. My mentor Georges Rey personally pointed out to me that we are only beginning to learn how the process of mental rotation works, whereas we have a good understanding of how rotating an object on a computer screen works. I think he has a point. So, not only is it too early to say whether a Tetris computer is an extension of our cognitive process of mental rotation, but we also have very good reasons to suspect that the process of mental rotation is fundamentally different from how we rotate an object on a computer screen.

I lost some interest in the Extended Mind debate, since there really isn’t that much debate anymore. From what I learned from Elizabeth Schechter, the Extended Mind proponents mostly talk among themselves. Most philosophers who disagree with the Extended Mind aren’t very interested in the debate anymore, except Adams and Aizawa. Nowadays, the Extended Mind proponents are mostly writing about what it means for a cognitive or mental process to be extended. Some believe that mental states can also be extended, whereas others think that only processes can be extended.

I admit that this is a biased blog post about the Extended Mind. I personally lost some interest in the debate, but I’m writing about it since it’s one of the first philosophical debates I got to know fairly well. I’m writing a critique of the Extended Mind as my honors thesis with my mentor Georges Rey. Rey pretty much dislikes the Extended Mind position. I recall that he even cringed reading Andy Clark’s arguments against the distinction between non-derivative and derived content. But I digress. My honors thesis is basically about how our cerebral hemispheres (right and left) constitute a coupled system, since there is inter-hemispheric communication between them. I assume throughout my paper that our cerebral hemispheres form a unified agent consisting of two minds. I go further to argue that, under the Extended Mind, if two cognitive agents (say, conjoined twins) communicate with one another, then they also form a coupled system. I argue that this coupled system is analogous to the cerebral hemispheres as far as the Extended Mind is concerned. If this is the case, then wouldn’t that coupled system constitute a unified agent? After all, if the coupled system is analogous to the cerebral hemispheres, then why can’t a coupled system of two cognitive agents constitute a unified agent?

This is pretty much my reductio ad absurdum argument against the Extended Mind. I argue that someone who isn’t an Extended Mind proponent, like Adams and Aizawa, would suspect that the process of inter-hemispheric communication is fundamentally different from that of personal communication between two normal cognitive agents. Georges Rey pretty much wants me to spell out that difference and explicitly emphasize it from a theoretical point of view. I have yet to do so.

Self-Knowledge and Dreams

In philosophy, specifically Epistemology, dreams can be used as a thought experiment for philosophical skepticism about our knowledge of the external world. In fact, Descartes uses them in his method of radical doubt to distinguish what we know with absolute certainty from what we know with only a degree of certainty. Descartes seems to think that our self-knowledge is secure in a dream scenario, but I want to challenge that view. I personally think that even in dreams self-knowledge has its limits.

I want to begin with what inspired this blog post. It starts with Peter Carruthers’s lecture on Dretske’s view of self-knowledge. According to Dretske, self-knowledge is essentially about the way the world is presented to me. In other words, when I see a red apple, I know that I seem to see a red apple. Notice that Dretske is not arguing that we know something about reality from perception. Instead, Dretske is talking about self-knowledge of our perceptual mental states. During class, I brought up an objection that Carruthers thought was a nice one. In my dream, I have an experience of seeming to see a red apple. This “dream” mental content of seeming to see a red apple seems indistinguishable from seeming to see a red apple in my wakeful state. So, how do I tell the difference between a mental content in a dream state and one in my wakeful state? Carruthers’s reply was the following: a “dream” mental content is in fact distinguishable from a “real life” mental content, because mental contents from dreams are less vivid and detailed than those of real life.

I’m not completely convinced by Carruthers’s reply, but I can’t exactly say why. Perhaps it’s because dreams can be very convincing and vivid to us during the dream state, and only in hindsight, when we wake up, do we see them as less vivid. I also wonder whether there are dreams that are just as vivid as our waking state. I want to push my objection a bit further in this blog post in order to show that even self-knowledge in dreams isn’t as reliable as Descartes and his successors thought. One caveat to consider is that not many philosophers today accept Descartes’ view of self-knowledge. But many of them still accept the idea that self-knowledge of at least some of our mental states (usually conscious mental states) comes close to infallibility. This is the popular view that I want to challenge in this blog post.

Here is the argument in the form of premises and a conclusion:

  1. Self-knowledge is knowledge about our conscious mental states.
  2. We have a special epistemic privilege with regards to our conscious mental states.
  3. A waking state is a conscious mental state.
  4. Therefore, we have a special epistemic privilege with regards to our waking state.
  5. If we have a special epistemic privilege with regards to our waking state, then we should be able to discriminate it from our dream state.
  6. But we often confuse our dream state with our waking state.
  7. Therefore, we do not have a special epistemic privilege with regards to our waking state.
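The core inference from (5) and (6) to (7) is a simple modus tollens. Here is a minimal sketch of that step in Lean 4; the propositional labels `P` and `D` are my own shorthand, not part of the original argument:

```lean
-- P : we have a special epistemic privilege with regards to our waking state
-- D : we can discriminate our waking state from our dream state
-- (The names P and D are mine, chosen for this sketch.)
variable (P D : Prop)

-- Premise (5): privilege implies discriminability (P → D).
-- Premise (6): we often fail to discriminate, read here as ¬D.
-- Conclusion (7): ¬P, by modus tollens.
example (h5 : P → D) (h6 : ¬D) : ¬P :=
  fun hp => h6 (h5 hp)
```

The contested step, of course, is whether premise (6)’s “often confuse” is strong enough to yield the outright ¬D used here; the formalization only shows the inference is valid once that reading is granted.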

I suspect that many philosophers of self-knowledge are committed to (1), (2), and (3). My argument tries to show that if they are committed to (1), (2), and (3), then they are committed to (4). However, I argue that (4) is implausible, given that we often confuse our dream state with our waking state. When we are in a dream state, we do not seem to have the capacity to discriminate it from our waking state. In the interesting case of false awakening, we sometimes dream about waking up from our dream; we have a dream within a dream. When we seem to wake up from our dream, we often believe that we are in a waking state. Can we seriously think that we have some kind of epistemic privilege with regards to our waking state in that kind of scenario? I think not. After all, the belief that I’m in a waking state is false in a dream scenario.

There are several ways to object to my argument. The first is that I never defined what I mean by a “waking state”. Obviously, it’s difficult to define a waking state. One of the main reasons is that we have to define it in a way that excludes dream states, even though both waking and dreaming states are conscious mental states. Perhaps one could argue that the term “waking state” is misleading, since there really isn’t a specific mental state called a “waking state”. Instead, a “waking state” is just another way of saying that there are mental states that take place when one is awake. In other words, one’s wakeful mental states are due to external inputs from the environment through one’s perceptual systems (i.e. visual, auditory, olfactory, etc.).

I doubt that the definition of a waking state given above is adequate, since there are cases in which one is still dreaming even though one’s mental state is due to some external input to one’s perceptual system. I remember that a friend of mine used to have a dream about a tickle monster. The tickle monster would tickle her in her dream. Ironically, it turned out that as she was dreaming, it was her sister who was tickling her. The external input (i.e. the tickling) still had an impact on my friend’s mental state, but she was still dreaming. There are many dreams like that, in which one’s dream is affected by one’s environment.

The second is that I never clarified what I mean by “special epistemic privilege”. Consequently, it is unclear whether having a special epistemic privilege implies being able to discriminate a waking state from a dream state (and vice versa). I think “special epistemic privilege” can have at least two interpretations. The first is that one has infallible authority with regards to one’s conscious mental states. I think this interpretation is too strong, since there are cases in which one initially confuses a hallucinatory state with a veridical perceptual state. The second is that we have highly reliable authority with regards to our conscious mental states. This epistemic authority is not infallible, but it is extremely reliable. I think this interpretation is more plausible than the first. But do we have an epistemic privilege of this kind with regards to our waking state? If we did, then we should be able to tell our dream state from our waking state, yet in many cases we can’t.

Perhaps one can argue that we have an epistemic privilege with regards to any conscious state whether they occur in a dream or not. I think this is a natural response to my argument. However, I think this response gives rise to another problem. Do we have the same kind of epistemic warrant with regards to our wakeful mental state about X as we do with our dream mental state about X? I won’t go into this problem, since it leads to another philosophical debate between internalism and externalism about epistemic warrant.

One could distinguish between judging one’s mental state to be a waking state and judging other phenomenal qualities of one’s mental state. For example, my friend who dreamed about the tickle monster misjudged her mental state of being tickled as a waking state, but she did not misjudge her phenomenal experience of feeling tickled. However, I argue that anyone who is committed to (1), (2), and (3) is also committed to (4). I don’t deny that my friend is epistemically entitled to her belief in her phenomenal experience of feeling tickled. What I do argue is that if someone is committed to (1), (2), and (3), then one is also committed to the idea that my friend is epistemically entitled both to her belief in the phenomenal experience of being tickled and to her belief that she is in a waking state. After all, both the phenomenal experience of feeling tickled and a waking state are conscious mental states.

I have to admit that I must do more reading in the philosophical literature on self-knowledge. Perhaps I also need to do some reading about dreams from a scientific point of view. I don’t think my arguments here are especially strong or rigorous; quite frankly, I don’t think I gave enough arguments. Most of them are based on my intuitions, but I thought the post was at least interesting enough to publish.

Chomsky and the Mind-Body Problem

In my “about” section, I mentioned in the “trivia” sub-section that I personally interviewed Chomsky. It was a pretty intimidating situation and I had no idea what to say to such a great intellectual figure, so I broke the ice by revealing to him that I’m not a great conversationalist. Amused, Chomsky replied that he’s not a great conversationalist either. For most of the interview we talked about his personal take on the mind-body problem.

Overall, Chomsky thinks that ever since Newton discovered action at a distance (i.e. gravity), the mechanical view of the world that Descartes espoused has been shattered. Prior to Newton’s discovery, the popular and standard view of intelligibility was mechanical explanation, which consists in appealing to physical cause or contact. It is precisely from this standard view of intelligibility that the mind-body problem became a philosophical problem for Descartes. If an intelligible explanation requires physical contact, then how can we explain our mind’s apparent causal relation with its body? When I experience pain, my mental state of pain causes me to twitch and scream. However, if that mental state isn’t physical, then its causal relation to my screaming and twitching is not a physical one. So, my mental state makes no physical contact with my bodily behavior, by virtue of its non-physicality. Under Descartes’ Mechanical Philosophy, my experience of pain cannot cause my bodily behavior. However, like most people, Descartes believes that there is a causal relation between the mind and the body. But what is this causal relation, if not a mechanical one? If the mechanical account of explanation is the only game in town, then how can we explain the mind-body relation?

Descartes didn’t have a satisfying answer to this problem. However, Newton’s discovery of gravity shows that a causal explanation doesn’t require physical contact: a massive object can affect another from a great distance without ever touching it. This discovery shattered the very framework of Descartes’ mind-body problem. So, given that the Cartesian framework for the mind-body problem was dissipated by Newton’s discovery, how can we even state the mind-body problem anymore? What precisely would make the mind-body problem a philosophical problem? Chomsky thinks that we really don’t have anything like Descartes’ mechanical philosophy with which to recreate the mind-body problem. The mind-body problem can no longer really be stated.

Chomsky relates this argument to the contemporary mind-body problem. He argues that unless we have an account that explains the nature of physicality, we don’t really have a mind-body problem. We used to have a mind-body problem in Descartes’ time, since we also used to have an account of physicality. Now it’s gone. Couldn’t we just create another account of the physical that is just as intelligible as Descartes’ Mechanical Philosophy? It sounds easy, but Chomsky argues that there is a dilemma for anyone who attempts to create an account of the physical. We can either rely on our a priori intuitions to define the physical, or we can rely on our a posteriori science to do so for us. If the former, then we’ll be disappointed, since our a priori intuitions about physicality just don’t match up with what physics tells us. We believe that our body, among other things, is solid, when physics tells us that it’s mostly empty space. We believe that time is absolute, when in fact it’s relative. Our a priori intuitions place no constraint on how we do physics. If the latter, then our notion of the physical is not stable and definite, since it changes radically as we make scientific discoveries about space-time, fields, dark matter, dark energy, black holes, the electromagnetic force, and so on. Either way, we cannot give a satisfying account of the physical [1].

I personally appreciate Chomsky’s critique of the mind-body problem. While others, such as my mentor Georges Rey, believe that the critique is completely misguided, I think there is something to it. What Chomsky is asking isn’t unreasonable: what makes the mind-body problem a philosophical problem? It was a philosophical problem for Descartes before Newton’s discovery, but why should it be a philosophical problem for us today? I think Chomsky’s demand is a reasonable one, and I’m writing about this in my independent study with Paul Pietroski.

One point to make is that the mind-body problem does not refer to a single philosophical problem; rather, there are several mind-body problems, concerning consciousness, intentionality, and more. It isn’t clear which one Chomsky is referring to, but I assume he has in mind all of these versions of the mind-body problem. My paper focuses on the mind-body problem with respect to intentionality, since I have a personal interest in the intentional nature of mental representations. The direction I’m going to take is to argue that there are at least two kinds of problems for intentionality: the explanatory role of content and the naturalistic conditions for intentionality. The former concerns how the intentional content of our mental states causes our behavior, whereas the latter searches for naturalistic conditions for what makes our thoughts about something. I’m going to argue that both of these problems of intentionality are philosophical problems unaffected by Chomsky’s critique of Physicalism. This is particularly hard, since it means I have to avoid presupposing what Chomsky is questioning, namely Physicalism. I’m basically trying to argue that the mind-body problem (intentionality) is still a philosophical problem even if there isn’t a coherent and intelligible account of Physicalism. I’m not exactly sure what arguments I have in mind, but I hope I can find them soon.

[1] This is Jeff Poland’s explanation of Chomsky’s argument against Physicalism. Check out Poland’s article from “Chomsky and his Critics”.

Speech Utterance and the Potential Skepticism of Derivative Content

[This is my first blog post and I don’t expect it to be on par with an academic paper, but I’ll do my best to keep it interesting.]

We normally assume that our speech utterances have some meaning (in this case, derivative content) encoded into them, but some philosophers, such as Georges Rey, find this assumption doubtful. Rey argues against what he calls Standard Linguistic Entities (SLEs), which are derivative contents encoded into our speech utterances. There are at least three arguments worth mentioning. The first is that two words can seem to have different meanings even though their acoustic properties are virtually indistinguishable. An example one could use is “Lock” and “Locke”: the former refers to an artifact, whereas the latter refers to a person.* Both terms, however, share the same acoustic properties. The second argument is that sentences have syntactic structures of noun and verb phrases, but we can’t find those structures by observing the acoustic properties of our verbal utterances. The third argument is that by examining the spatiotemporal region of my utterance, I cannot find any causal structure that individuates my utterance in terms of SLEs.

I think Rey provides some interesting arguments, but if SLEs do not exist at all, then I wonder whether any derivative content exists at all. We take for granted that a signpost, a written paper, or a picture has meaning, but do they literally possess something like an SLE? One could say that an artifact possesses derivative content insofar as we intend to give the artifact its meaning and take appropriate measures to do so. So, all things being equal, if we intend to give an artifact (or medium) some meaning and take appropriate measures to do it, then it has derivative content. If this is the case, however, then our speech utterances should have meaning too, since when we utter sentences we intend to give them meaning. But if what Rey argues is correct, then our speech utterances cannot have any meaning whatsoever, even if we intend to give them meaning and attempt to realize that intention.

I’m not arguing that Rey’s position commits him to the denial of derivative content in general. Instead, if Rey’s position is correct, it seems that a realist about derivative content has to do more than just say that derivative contents are realized in an artifact or medium by virtue of one’s intention and effort. A realist about derivative content has to come up with some kind of account of how an artifact derives meaning from its user. I don’t pretend to offer a realist account, since I am personally unsure whether derivative contents exist (which may sound incoherent, since I am writing this blog post). I do, however, think that I can provide an alternative to realism about derivative content.

While I am a realist about non-derivative intentionality, I think it may be possible to give an intentional stance account of derivative intentionality. Daniel Dennett came up with what is known as the intentional stance. According to this account, propositional attitudes do not really exist in a strict sense, but we attribute them to certain patterns from which we can make successful predictions. I propose a slightly different account, a derivative intentional stance. Under this account, derivative intentionality does not exist in a strict sense, but we can attribute meaning to a certain pattern created by an agent (including oneself) who intends to use it as a message to any interpreter. This stance presupposes other minds as background knowledge, but instead of trying to understand behavior, we try to understand the pattern created by an agent with the intention of sending us a message.

So, for example, we may come across an acoustic pattern emitted by an agent through a verbal utterance and attribute derivative meaning to it. This attribution does not mean that the meaning derives from the interpreter; rather, the interpreter attributes meaning to the pattern as if it had a meaning deriving from the agent who created it. So, SLEs may not exist, but I attribute SLEs to the acoustic patterns created by a speaker as if they really existed as derivative content.

One possible objection to this account is that there are cases in which I just see a pattern and attribute some meaning to it without the presence of the original creator of that pattern. A common example might be a sound recorder that emits an acoustic pattern to which I attribute some meaning. One reply to this objection is that as long as one believes that the pattern was created by an agent, one can simply attribute meaning to that pattern.

I’m not going to give further arguments for this account here, but I might try to give a more robust explanation and defense of it in a later post. I hope this is at least a decent first post for a philosophy blog.

*This is my personal example