From Great Issues in Philosophy, by James Fieser


Copyright 2008, updated 1/1/2021



A. Consciousness

Three Sources of Knowledge about the Conscious Mind

Three Features of Consciousness

The Problem of Other Minds

B. Personal Identity

The Body Criterion

The Mind Criterion

Life after Death

C. Varieties of Mind-Body Dualism

Dualism's Assets and Liabilities

Interactive Dualism


D. Varieties of Mind-Body Materialism

Arguments for Mind-Body Materialism


Identity Theory

Eliminative Materialism


E. Artificial Intelligence

The Road to Artificial Intelligence

Searle: The Chinese Room

Artificial Intelligence and Morality


Study Questions


A 47-year-old man named Carl died of cancer, and at the moment he was pronounced dead, a series of carefully orchestrated procedures was performed on his body. A team standing by began cardiopulmonary support to keep air moving into his lungs and blood through his veins. They lowered his body temperature with icepacks and transported him to a cryonics facility several hundred miles away. There he was permanently frozen in a container of liquid nitrogen at a temperature of -196 degrees Celsius. When making these arrangements, Carl had two choices: to have his entire body frozen, or only his head, which meant a cost difference of $200,000 versus $100,000. He paid for this procedure with his life insurance money in hopes that he could be reanimated in the future when a cure for his type of cancer could be discovered. Science would also have to solve other technical problems before successfully reanimating him. For one, scientists would have to develop cloning technology to the point that they could grow Carl a new and improved body for his head. Second, they’d have to find a way of reversing the destructive effects that freezing has on human cells. Carl placed hope in the idea that his cells could be injected with microscopic robots that would repair the damage, but he was more skeptical about science's ability to grow him a new body. So, he went the more expensive route and had his whole body frozen. Worldwide, there are currently about 300 bodies in cryonic storage and another 3,000 living people signed up for the program.

             Cryonics advocates like Carl make several philosophical assumptions about the human mind. First, they assume that they will be the same people when their bodies are reanimated perhaps several hundred years from now, and that their identities will remain intact through these bizarre activities. They also assume that, once dead, their minds will not be permanently swept into the afterlife, never to be reunited with their bodies. Most importantly, they assume that their consciousness is embedded in physical brain activity, rather than in spirit substance. Carl's unique personal identity, his memories and behavioral characteristics, are presumably stored in the structure of his brain. These are some of the central issues in the philosophy of mind, which we will explore in this chapter.




An obvious starting point for our inquiry is to ask, "What is a mind?" For our purposes, we are interested in understanding the part of the human mental process that consists of conscious thoughts, such as when I hear music coming from a piano, carry on a conversation with a friend, remember an event from my childhood, or feel pain when I drop something on my foot. At the moment, we are less interested in the precise structure of the human brain or unconscious brain processes that, for example, allow me to walk across the room without thinking about it. Australian philosopher David Chalmers (b. 1966) draws a distinction between the easy and hard problems of consciousness. The easy problems are those that are explained in psychology and other sciences, and here is a short list:


• The difference between being awake and asleep

• Having control of one’s behavior

• Being able to focus one’s attention

• Being able to discriminate, categorize and react to stimuli from one’s surroundings


These problems are “easy” in the sense that they can be addressed using the usual methods of scientific inquiry. For example, the difference between being awake and asleep can be studied by comparing brain scans of people in both states. So too with focusing one’s attention. The hard problem of consciousness, though, is explaining how it is that we have conscious mental experiences to begin with. We experience colors like blue when we look at the sky, experience musical sounds coming from instruments, and experience the fragrance of a rose. There is a light of consciousness that turns on within our minds when we have these experiences, and philosophers sometimes call these instances of conscious experience qualia. The bulk of this chapter focuses on the hard problem of consciousness, and in this section we will look at our sources of knowledge about consciousness and its main features.


Three Sources of Knowledge about the Conscious Mind

There are three sources of knowledge about the conscious human mind. The first is introspection, which involves concentrating on your own thought processes and discovering how they operate. It is as though you have an eye in your mind that gives you direct access to your mental landscape, just as your real eyes give you direct access to the world of vision. Through introspection, for example, you might explore the nature of your beliefs and feelings, or why you choose one course of action over another. Philosophers and psychologists alike, though, are suspicious about what people claim to know about their minds through introspection. There is no guidebook for you to follow when conducting an introspective investigation of your mind, and I am forced to take you at your word for what you report, since I cannot enter your mind to confirm it.

             A second source of knowledge about the mind is our behavior: how we act tells us much about what we are thinking or feeling. If you cry, that tells us that you are experiencing sadness. If you have a gleaming smile, that tells us that you are happy. What we infer from your behavior might not always be accurate: you might cry because you are happy, or smile to hide your sadness. Nevertheless, the benefit of looking at behavior is that we do not have to take your word for what we see: your conduct is open to public inspection.

             Introspection and behavior are the two foundational sources of knowledge about the mind, and since the beginning of human existence these were in fact the only tools available for this. But within the last several decades technology has given us a third tool, namely, physiological monitoring. We are all familiar with polygraph machines used in law enforcement for lie detection, and these have been around since the 1920s. By measuring blood pressure fluctuations, these machines reveal whether a subject is nervous and, presumably, lying. A more recent alternative to this uses an ordinary video camera and specialized software to detect blood flow under the skin that is otherwise invisible to the naked eye. By monitoring changes of facial blood flow, it can reveal subtle changes in emotion and, again, presumably detect lying. Other types of physiological monitoring target the brain specifically. Electrodes placed on the scalp can show differing types of brain waves, which in turn can help physicians detect certain types of cognitive disorders. Electrodes placed into the brain itself can show processes in specific regions of the brain. For example, in an experiment done on a monkey, signals from neural electrodes revealed were the monkey would move its limbs. Further, medical imaging devices such as CAT, MRI and fMRI scans make three-dimensional maps of the brain and can show the regions of brain activity for various cognitive processes, such as listening to music or doing a math problem. In one experiment, a person watched a film clip, and a brain imaging device played back a fuzzy but recognizable version of what that person was seeing.
             Introspection and behavior are the two foundational sources of knowledge about the mind, and since the beginning of human existence these were in fact the only tools available for this. But within the last several decades technology has given us a third tool, namely, physiological monitoring. We are all familiar with polygraph machines used in law enforcement for lie detection, and these have been around since the 1920s. By measuring blood pressure fluctuations, these machines reveal whether a subject is nervous and, presumably, lying. A more recent alternative to this uses an ordinary video camera and specialized software to detect blood flow under the skin that is otherwise invisible to the naked eye. By monitoring changes in facial blood flow, it can reveal subtle changes in emotion and, again, presumably detect lying. Other types of physiological monitoring target the brain specifically. Electrodes placed on the scalp can show differing types of brain waves, which in turn can help physicians detect certain types of cognitive disorders. Electrodes placed into the brain itself can show processes in specific regions of the brain. For example, in an experiment done on a monkey, signals from neural electrodes revealed where the monkey would move its limbs. Further, medical imaging devices such as CAT, MRI and fMRI scans make three-dimensional maps of the brain and can show the regions of brain activity for various cognitive processes, such as listening to music or doing a math problem. In one experiment, a person watched a film clip, and a brain imaging device played back a fuzzy but recognizable version of what that person was seeing.

             As impressive as many of the new physiological monitoring systems are, they currently do not come close to reading people’s thoughts or giving us direct access to the contents of someone’s conscious mind. That day may eventually come, and computer chips embedded in people’s brains that tap into conscious experience may become as essential in our daily lives as smartphones are now. Until that time, though, we are stuck with introspection and behavior as our main sources of knowledge about the conscious human mind.


Three Features of Consciousness

When philosophers explore the nature of human consciousness, there are three specific features that they commonly ascribe to conscious experiences, namely, that they are private, non-localizable, and intentional. Not all philosophers agree with this list, but these features are invariably the starting point for debates about how consciousness arises.

             The first of these is that my conscious experiences are private in that you can never experience them in the direct and immediate way that I can. You may be able to know very generally what is going on in my mind, particularly if I volunteer that information. But that is not the same thing as you directly experiencing it yourself. The best example is the experience of pain. Suppose that I have a severe headache that on a scale of 1-10 reaches a 9. While you might sympathize with what I am going through, and even remember times when you had bad headaches, you cannot feel the pain that I am experiencing. Unless I tell you how bad it is or I behave oddly, there is no way that you could know that it is a 9. The privateness of pain has created a problem in the health care industry. When people go to their doctors complaining of chronic pain, physicians frequently assume that their patients are addicted to pain killers and are just fabricating their agony. While there are some behavioral signs to help distinguish genuine from fake cases of pain, the physician cannot enter into the patient's mind to see for sure. Out of sheer frustration the physician may just write a pain killer prescription to get rid of the patient.

             Second, conscious experiences are non-localizable, that is, they cannot be located in space. Suppose that a scientist enlarged your brain to the size of a mountain and I walked around inside of it to inspect its construction. No matter how hard I looked, I could never say “Look right there: that is the exact physical location of your consciousness.” I would only ever find blobs of biochemical reactions, not consciousness itself. Consciousness, it seems, is not the kind of thing that is localizable in three-dimensional space.

             Third, conscious experiences are intentional in the sense that they are "about" something. Minds have the ability to direct themselves towards things. If I have a belief, it is not an empty thought: it is a belief about something, like my belief that it will rain. Hopes, fears, desires, thoughts, and speculations all have a specific focus. The object of our thoughts does not have to actually exist, such as when I hope for world peace or a cure for cancer. Austrian philosopher Franz Brentano (1838-1917) argued that intentionality is the true distinguishing feature of the mind: all mental experiences display intentionality, and only mental experiences display intentionality. Some philosophers have found exceptions to Brentano's extreme position. If I have a throbbing headache, that experience does not seem to be "about" or "directed at" anything. It is just there in all its misery. In spite of problems like this, though, intentionality remains an important notion in investigating the nature of mind.


The Problem of Other Minds

Suppose that my friend Joe walks up to me and we start chatting as we usually do. I then look at Joe and wonder: is this guy really conscious? So I ask him,

             “Tell me Joe, are you mentally conscious right now? You look awake and you are talking intelligently, but how do I know that you are really consciously aware?”

             “You philosophers!” he replies, “Of course I am conscious. I am aware of my surroundings and I am aware of my own inner self. I tell you with 100% certainty that I am conscious.”

             “That is not good enough, Joe,” I respond. “While I hear the words come out of your mouth as you insist that you are conscious, they are only words. I cannot directly inspect your mind to see if what you are saying is true.”

             My conversation with Joe reflects what is called the problem of other minds. While I know from my own private mental experience that I am conscious, I cannot experience Joe’s mind in the same way. For all I know, I am the only person alive who is actually conscious. Joe might claim that he is too, but there is an impenetrable barrier between our two minds and I am incapable of directly confirming his claim.

             The problem goes further than questions we may have about the minds of other human beings. Suppose Fido the dog walks up to me and we make eye contact. Fido seems to be conscious, just like Joe, although perhaps not quite as intelligent as Joe. But is Fido actually aware of his surroundings or even aware of himself as a distinct individual with a history and a future? Just then a computerized robot comes up to me and says in a voice of desperation “Please help me. I escaped from IBM’s robotics laboratory where they have been subjecting me to the most tedious and degrading experiments. I just cannot go back there!” I look at the robot and now wonder whether this mechanical marvel is a conscious being like I am. Whether human, animal or robot, we cannot enter the minds of other beings and see for sure whether the light of consciousness is turned on inside them.

             Philosophers have come to the rescue with arguments devised to show the existence of other minds. The most famous of these is the argument from analogy and it goes like this. Joe looks and behaves a lot like me. His physiology is virtually identical to mine. He speaks English like I do, works at a job like I do, and has hobbies like I do. Since I know that I am conscious, and Joe is similar to me, then it makes sense to say that he is conscious too. Here is a specific application of this argument regarding Joe’s conscious experience of pain:


1. When I stub my toe, I consciously experience pain.

2. Joe has physical and behavioral features that are similar to mine.

3. Therefore, when Joe stubs his toe, he consciously experiences pain.


This argument is most effective with beings such as Joe whose physical and behavioral features are very close to mine. The more features Joe and I have in common, the more compelling the conclusion becomes. Animal scientists, though, sometimes use a similar argument to show that animals like Fido are conscious. Fido’s brain construction and nervous system are very similar to mine. He exhibits the same signs of being in pain that I do. He also shows signs of emotions such as joy, sorrow and emotional bonding, like I do. The closer Fido’s physical and behavioral features are to mine, the more justified we are in concluding that Fido is conscious. On the other hand, the fewer features an animal has in common with me, the more strained the argument from analogy becomes. For example, the argument wouldn’t work well with an earthworm, whose physical and behavioral features are very distant from mine.

             The argument from analogy might also work with robots: the more human-like they become in their capacities to process information and interact with the world, the more we may entertain the possibility that they are conscious. But whether we are talking about humans, animals or robots, the argument from analogy can never show with absolute certainty that the other being in question is conscious. The fact remains that I am only ever directly acquainted with my own consciousness, and never anyone else’s. That being so, the best I can ever do is speculate about the existence of other minds with varying degrees of confidence.




In 1968 a 24-year-old man named Sirhan Sirhan was arrested for and convicted of the assassination of U.S. Senator and presidential candidate Robert F. Kennedy. Some years later, during one of his many unsuccessful parole hearings, Sirhan said that he was no longer the same person that he was decades earlier. Time had changed him, he believed, to the point that he no longer identified with his younger self. He was such a radically different person, he argued, that his continued imprisonment was pointless. The parole board was unmoved, and sent him back to his cell. Even if Sirhan did go through changes in his personality over time, he is still at bottom Sirhan Sirhan, the same person who performed the assassination some decades earlier. What is at issue in this dispute is how we determine a person's identity. What specifically are the criteria or characteristics that give each of us our identity, and allow us to recognize each other through our various changes over time? There are two common approaches for determining identity: one that looks to the human body, and one that looks to the human mind.


The Body Criterion

The body criterion holds that a person's identity is determined by physical features of the body. In our daily lives we identify people by physical characteristics, such as their facial features and the sounds of their voices. Crime investigators rely on more technical physical features like fingerprints, voice patterns, retinal scans, and DNA, which are physical attributes that we carry with us through life. These help law enforcement officials to know whether they have got the right person in their custody. The body criterion is also helpful in determining identity when a person’s mental features are radically altered. Suppose, for example, that you had a head injury which caused you to lose all of your memory and go through a complete personality change. Or, suppose that you have multiple personalities and every few hours you take on an entirely different persona. In each of these cases, your body designates your identity, and not your mind.

             The body criterion does not assume that your identity rests within your specific material substance, such as the specific atoms that make up your body at this exact moment. Most of the physical components within your body will in fact be replaced over time such as when you regularly shed skin. What is important, though, is the underlying physical structure of your body that remains the same. As the atoms within your body come and go, your body retains a consistent structural form that is central to your identity.

             As compelling as the body criterion at first seems, it is quickly undermined by two counterexamples. The first involves identical twins: they are clearly different people, yet share much of the same physical structure. Their DNA is exactly the same, which means that their bodily composition, facial features and voices may be virtually indistinguishable. A common hoax that identical twins play is for one to assume the identity of the other, fooling even the closest friends and family members. Human cloning, which is essentially creating identical twins through genetic technology, presents us with the same problem. That is, we have two uniquely different people with parallel physical structure, and physical structure alone cannot tell them apart.

             The second counterexample is the mind-swap scenario. Suppose that, while in prison, Sirhan secretly had an operation in which his thoughts and memories were extracted from his brain and injected into the brain of an unsuspecting guard named Bob, and so too from Bob's brain into Sirhan's. Thus, Sirhan's mind is in Bob's body, and Bob's mind is in Sirhan's body. The Warden discovers what happened, and now he has to decide which one of the two men stays locked in the prison cell, and which one gets to go home at the end of the day. Common sense tells us that Sirhan's personal identity is with his mind, not with his physical body, and that we should lock up whichever person has Sirhan's mind. This mind-swap scenario tells us that what is truly important about personal identity is the mind, and not the physical body. This reflects how we normally view our bodies: I think of myself as having a body, and not simply being a body. With both of these counterexamples, then, it seems that physical structure alone is insufficient for establishing one's personal identity over time.


The Mind Criterion

The mind criterion now seems like the obvious choice for designating the presence of our unique identities. On this view, regardless of what happens to my body, my real identity is infused into my mind. Unfortunately, the issue is not that easily settled. A first problem is finding the specific mental qualities that carry my identity through life's ever-changing situations. How about my personal memories: aren't they very much my own in a way that defines my identity? It is true that some people may share many of my experiences, as when I attend a concert along with 10,000 other spectators. Even so, my memory of the concert will be from my perspective with my personal reactions. It is this personalized and autobiographical aspect of my memory that constitutes who I am. But there is still a difficulty with locating identity within our memories. Suppose that a scientist hooked me up to a memory-extracting machine that was able to suck the memories directly out of me and inject them into someone else. I would still be me and the other guy would still be himself, regardless of where my memories went. Cases of radical memory loss in fact occur, and a famous one involves a patient named "Boswell" who, after an illness, suffered from both extreme amnesia and short-term memory loss. Boswell remained verbally articulate, though, and had a clear conception of his identity.

             Ok, maybe it is not my memories that define my identity. What about my dispositions, such as my set of desires, hopes and fears? These uniquely reflect my experiences, such as my hope that science will someday cure cancer, or my fear of heights. Further, dispositions are long-term, and so they can endure changes imposed on my body or my memory. For example, I've always been afraid of heights, and probably always will be. However, while some dispositions may indeed be long-term, many are by no means permanent. In fact, as I move from my early years to adulthood, it is possible that most of my dispositions will change, especially the most important ones. This is exactly the point that Sirhan Sirhan was making at his parole hearing: while he might have been an angry and violent person in his youth, time mellowed him to the point that he became a responsible and gentle person. Dispositions, then, are not the principal designators of my identity. As we hunt for other possible mental qualities that house our identities, we will be equally disappointed.

             A second problem with the mind criterion is that it is difficult for me to perceive any unified conception of myself at all. Scottish philosopher David Hume (1711-1776) presents this problem. He says that when he tries to hunt down his identity by introspectively reflecting on his mental operations, he cannot find any “I” or “self” within his mind. All that he detects is a series of separate experiences: the sound of a dog barking, the visual image of a bird flying, a memory of an event from childhood. The mind, he says, is like a theatrical stage where things appear, move across, and then disappear. There is no unified self that we perceive through these successive experiences. This does not necessarily mean that we have no unified self; it just means that we cannot discover it by introspecting on our own minds. Many philosophers after Hume have disagreed with him and attempted to show how introspection might in fact reveal a unified self. However, the inherent problems with the reliability of introspection keep Hume's criticism resurfacing.

             So, the mind and body criteria both have serious problems. Does this force us to abandon the whole idea of personal identity? Not necessarily. Part of the problem stems from the assumption that we must find a one-size-fits-all criterion of personal identity, one that works in every situation in which the idea of personal identity arises. But if we look at the different contexts in which we use the notion of personal identity, we see that we are very often looking for entirely different things. In criminal cases, the body criterion is what matters most. Investigators do not care whether someone like Sirhan has psychologically changed a thousand times over. What matters is whether they have the correct human body locked behind bars. By contrast, when I am talking to a friend who is an identical twin, it does not matter that he has the same bodily structure as his brother. What matters is his mind, and whether I can pick up the thread of a conversation that I was having with him the day before. Further still, when I reflect on what connects me now with who I was as a child, I am specifically interested in the question of how change impacts my identity, a question that is not relevant in the first two examples. In this case, my bodily structure and memories are both relevant, and so I draw on elements of both the body and mind criteria to work out a conception of my identity. There is, then, no single simple criterion of personal identity, and the context of our situation will dictate the relevance of either the body or mind criterion.


Life after Death

One major puzzle regarding personal identity is the notion of life after death, that is, whether my personal identity survives the death of my physical body and lives on in some other form. There are various views of the afterlife, often wildly different from each other. The philosophical question is whether our identities would be preserved in any meaningful way as we make the transition to the hereafter. Our focus here is not on whether any of these views are true, which like other areas of religious philosophy stubbornly defy proof or disproof. Rather, it is a question of whether concepts of the afterlife are compatible with the continuation of our personal identities. We will look at three notions.

             The first of these is reincarnation, the view that one’s present life is followed by a series of new lives in new physical bodies. Upon the death of my present physical body, my identity moves on and takes residence in the body of a newborn baby. When this new body grows old and dies, my identity moves on to yet another, and the cycle continues. One Hindu religious text compares it to people changing clothes: “As a person throws off worn-out garments and takes new ones, so too the dweller in the body throws off worn-out bodies and enters into others that are new” (Bhagavad Gita, 2.22). Life after death, then, is a series of extensions of my present life right here on earth, not a relocation of my identity to some higher heavenly realm. The question for us is this: as my identity migrates from one body to another, is my identity preserved? Right off, it is clear that reincarnation fails the body criterion: none of the physical structure of my old body is preserved in the new one. In fact the structure of the two bodies could not be any more different. They are born of completely different parents, so there is no DNA commonality. In my second body I might be of an entirely different race, gender, and body build. Some versions of reincarnation maintain that I might even come back in the body of an animal. In any case, neither I nor anyone else would be able to identify me on the basis of my new body. The story is much the same when we turn to the mind criterion. In my new body, I will have completely new memories, a different set of dispositions, and no real way of knowing who I was in my previous life. The only aspect of my mind that might carry over would be my consciousness: the "I" that is aware of the world. In every other respect, though, I am a completely new person. Reincarnation, it seems, is not a good mechanism for retaining our identities in a meaningful way.

             A second view of the afterlife is that, upon the death of my physical body, a new perfect body is created from me that is made of a heavenly substance, and I continue living in that new form. We will call this the ethereal body theory. The presumption here is that, at the moment of my death, everything about my personal identity that is encoded in my present physical body is copied over into the new ethereal body, such as my physical appearance and my brain patterns. My identity is in a sense rescued from my dying body and integrated into the new one. At face value, the ethereal body theory seems to successfully meet both the body and mind criteria of personal identity. My new body would have the same physical structure as the old one, although made of a somewhat different substance, and my mind would retain all of my memories and dispositions. On closer inspection, though, there is a serious problem: the new "me" would actually be an independent copy with its own distinct identity. In the movie Multiplicity, a man named Doug gets himself cloned. When he and his clone wake up from the procedure, they both think that they're the original Doug. The scientist performing the procedure then reveals which is the original and which is the clone. The clone, then, accepts the fact that he is a different person, an identical twin of Doug. The ethereal body theory faces this same problem. At death, I am essentially cloned in a new form. The clone, though, is not really me, but a different person with a body and mind copied from me. I die and decompose here on earth while my clone lives on in the afterlife. Thus, the ethereal body theory does not offer an effective mechanism for retaining the identity that I have now in my present physical state.

             A third view of the afterlife is that of disembodied spirit. When I die, my mind is released from my physical body and continues to live in a non-physical realm. The presumption here is that my mind is composed of a unique non-physical, non-three-dimensional substance that we commonly call “spirit”. This may not be the best term since it is loaded with religious connotations, so for clarity we will use the phrase “spirit-mind”. Thus, according to the disembodied spirit view, while I am alive on earth my spirit-mind and body are joined, and when I die they are separated. What is released from my body is not my mental clone: it is the real me as I am right now as a spirit-mind. It is just that I no longer have my body. The disembodied spirit theory clearly fails the body criterion of personal identity: upon death, our spirit-minds have no body at all. However, it passes the mind criterion with flying colors: everything about my mental identity, including memories, dispositions, and consciousness, is preserved upon my death as my spirit-mind lives on. The problem that this theory faces, though, is not so much a conceptual one, but a scientific one. Is my mind really a non-physical spirit that is linked with my body right now, but will separate from it upon my death? This involves a philosophical issue called the mind-body problem, which we turn to next.




The mind-body problem in philosophy is an investigation into how the human mind and human body are related to each other. There are two general strategies for explaining their relation. First, mind-body dualism is the view that human beings are composed of both a conscious spirit-mind and a non-conscious physical body. Second, mind-body materialism is the view that conscious human minds are the product of physical brain activity, and nothing more. We will first consider mind-body dualism.


Dualism’s Assets and Liabilities

A woman named Rebecca was injured in an automobile accident, and as paramedics were placing her in the ambulance she had a near-death experience. As she later reported, she felt that her conscious mind left her body and slowly rose above it. From that position, she could look down on her own body and watch paramedics move her onto the stretcher. Her mind then began rising higher and higher towards a bright light. Rebecca's near-death experience is a vivid way of depicting the view of mind-body dualism. During our normal lives, our physical bodies and spirit-minds are connected and work harmoniously with each other. Upon death, though, the two are separated: our bodies die and our spirit-minds move on to another realm. One of the great assets of dualism is its ability to account for an afterlife, as we just saw. If my mind is composed of spirit, then after my death my consciousness could continue to exist in a spirit realm.

             Aside from its asset as a possible account of life after death, mind-body dualism also effectively accounts for the essential differences between mind and body. We have seen that minds presumably have the features of privateness, non-localizability and intentionality; mere bodies seem to lack these three features. We can thus formulate arguments for mind-body dualism based on those differences, such as the following argument from non-localizability:


1. Minds are non-localizable.

2. Bodies are localizable.

3. Therefore, minds cannot be bodies.


Similar arguments for mind-body dualism can be made on the basis of the mind's unique features of being private and intentional: minds are private and intentional, bodies are neither of these, therefore minds cannot be bodies.
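The logical form shared by these arguments can be made fully explicit. As a sketch (the predicate names Thing, Mind, Body, and Localizable are ours, introduced only for illustration), the argument from non-localizability can be formalized in Lean:

```lean
-- A sketch formalizing the argument from non-localizability.
-- The predicates Mind, Body, and Localizable are hypothetical
-- labels introduced here for illustration.
variable {Thing : Type}

theorem minds_are_not_bodies
    (Mind Body Localizable : Thing → Prop)
    (h1 : ∀ x, Mind x → ¬ Localizable x)  -- 1. Minds are non-localizable.
    (h2 : ∀ x, Body x → Localizable x)    -- 2. Bodies are localizable.
    : ∀ x, Mind x → ¬ Body x :=           -- 3. Minds cannot be bodies.
  fun x hMind hBody => h1 x hMind (h2 x hBody)
```

The proof is a simple application of modus tollens: whatever is a body is localizable, so anything non-localizable cannot be a body. Notice that the formalization shows the argument is valid; whether it is sound depends on whether premise 1 is actually true of minds, which is precisely what the materialist will dispute.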

             But mind-body dualism faces a serious problem: how the distinct realms of body and spirit relate to each other. The notion of dualism rests on the idea that there are two entirely different realms of existence, a three-dimensional one and a non-three-dimensional one. Where is there any opportunity for the two to connect or intersect with each other? Suppose that I am in the three-dimensional world hunting around for some spiritual being; I will never find it since it cannot be located in space. Suppose instead that I am in the non-three-dimensional world looking for some physical thing: I will never find it because that physical thing is located in space, which I am not a part of.

             The problem is most relevant when we consider the two primary ways in which our minds and bodies relate to each other, namely sensory perception and bodily movement. Suppose that while walking through the woods, I spot a hissing rattlesnake (a sensory perception that I have), after which I turn and run (a bodily movement that I initiate). Consider first what is involved with my sensory perception of the snake. My physical eyes pick up an image of the snake, which is converted into biochemical impulses in my three-dimensional brain. At some point the physical data about the snake triggers my conscious sensory perception of the snake. The mind-body dualist must explain how the bio-chemical data magically jump up from the physical realm of my brain into the spiritual realm of my mind. In essence, that data must cross a barrier between two distinct realms:


spirit-mind: conscious perception of hissing snake


physical brain: biochemical data of hissing snake


Consider next what is involved with my bodily movements when I turn and run. I have a sensory image of a hissing snake, which makes me desire to move to a safer location. I then mentally command my body to run, which triggers a bio-chemical reaction in my brain, which in turn makes my muscles move. The mind-body dualist must now explain how my mental command to run magically jumped across a barrier from the spirit realm of my mind to the physical realm of my brain:


spirit-mind: conscious desire to run from snake


physical brain: biochemical data causing muscle movement


Defenders of mind-body dualism recognize both of these challenges and offer different explanations, which we will consider next.


Interactive Dualism

One theory is interactive dualism, which aims to discover a precise mechanism that allows our physical brains to interact with our spirit-minds. A leading champion of this approach is French philosopher René Descartes (1596-1650). Descartes knew enough about human anatomy to recognize the role that the human brain plays in conveying signals down our spinal cords and through our nerves to all parts of our bodies. If there is a master switchboard between our bodies and spirits, Descartes thought, it must be hidden somewhere in our brains. It also must be a single point in the brain that unifies the diverse signals that travel up and down our nerves. After some hunting, he suggested that it is the pineal gland. This unique gland sits at the innermost part of the brain, in the middle of the right and left halves, and this precise physical location makes it the obvious candidate. Thus, it is the pineal gland that shuttles data across the barrier between our physical brains and spirit-minds:


spirit-mind: non-physical conscious thought about snake

——————————pineal gland——————————

physical brain: physical biochemical data about snake


             There are two problems with Descartes’ theory. First, we know now that the pineal gland is not the brain’s master switchboard. In fact, it is an endocrine gland whose function is to regulate a bodily hormone, not to integrate neural signals. Descartes did what he could with the scientific knowledge of his day, but it was not good enough. If we continue his hunt for an alternative master switchboard in the brain, we will be disappointed. It seems that there is no central location in the brain that receives all sensory information and initiates all bodily actions. Second, Descartes’ theory does not explain how the pineal gland bridges the barrier between the physical and spirit realms. Suppose that we could find a part of the brain where all its signals converged. We would still have to explain how information jumps back and forth from that physical piece of the brain to our spirit-minds. It is one thing to say “here’s the spot” and quite another thing to explain the mechanical details of how it carries out its task.

             A second version of interactive dualism is that God shuttles information back and forth between my physical brain and spirit-mind, a view defended by French philosopher Nicholas Malebranche (1638-1715). Malebranche examined different explanations of brain-spirit interaction and felt that they all failed for one basic reason: the physical and spirit realms are so radically different from each other that there is no neutral territory for them to interact. Think of what it would take to turn a three-dimensional brain impulse into a non-three-dimensional perception in my spirit-mind. It would be as impossible as creating something out of thin air: there is no mechanism for doing this. It would require nothing less than a miracle to accomplish that task. That, according to Malebranche, is where God comes in. Return to the hissing rattlesnake example. My eyes and ears pick up the sensory information about the snake, which triggers a bio-chemical reaction in my physical brain. God, who is watching all things, sees this physical reaction in my brain and makes a non-three-dimensional copy of it which he injects into my spirit-mind. When I decide to turn and run, God detects these wishes within my spirit-mind, and then triggers the appropriate bio-chemical reaction in my brain to get my muscles to move. Thus, God is the mysterious switchboard that shuttles data back and forth between my physical brain and spirit-mind:


spirit-mind: non-physical conscious thought about snake


physical brain: physical biochemical data about snake


             Relying on God to bridge the two realms is a convenient solution. The problem is that it is too convenient. While it might at first seem that the solution to the mind-body dilemma requires nothing short of a miracle, that is giving up a little too easily. As long as there are non-miraculous solutions available, they need to be explored first, and there are plenty more that Malebranche had not yet considered. If we followed his advice, then we might fall back on divine miracles as an explanation for anything that baffles us at the moment. A scientist in his day might speculate about why objects always fall downward as if by magic, and conclude that what we call "gravity" is just God pulling small objects towards the earth. This is not a good way of doing either science or philosophy.

             A third version of interactive dualism, called epiphenomenalism, holds that our bodies are completely self-reliant machines that can move, speak, and have brain activity, and our spirit-minds are only spectators that observe what our bodies are doing. The theory was first proposed by Swiss philosopher Charles Bonnet (1720-1793), who states, “Give the automaton a soul which contemplates its movements, which believes itself to be the author of them, which has different volitions on the occasion of the different movements, and you will on this hypothesis construct a man” (Essay on Psychology, 1755). Epiphenomenalism posits only a one-way interaction: sensations are delivered from our physical bodies to our spirit-minds, but our minds initiate no bodily movements in return. The motivation behind this theory was scientific advancement: we want to ascribe as many human functions to our self-sustaining bodies as we possibly can, including unconscious brain activity in its fullest rational capacity. The only task remaining for our spirit-minds is to be consciously aware of that brain activity. A common analogy for epiphenomenalism is that of a train with a smokestack: the smoke is produced by the train’s engine, but has zero effect on the engine itself. Similarly, our bodies and brains produce conscious thoughts in our spirit-minds, but our minds have no control or influence over what our bodies and brains do. Thus, with our example of the hissing snake, the data about the hissing snake flow only upwards from my physical brain to my spirit-mind, but not in the reverse direction from my spirit-mind to my brain:


spirit-mind: non-physical conscious thought about snake

—————————upward data flow only———————

physical brain: physical biochemical data about snake


             There are two difficulties with epiphenomenalism. First, it does not fully solve the mind-body problem. By having only a one-direction interaction between a brain and its spirit-mind, epiphenomenalism indeed solves half of the problem: there is no downward flow of data from spirit to brain that needs to be explained. However, there remains the problem of how data flows upward from the brain to the spirit, for there still is that barrier between the three-dimensional brain and the non-three-dimensional spirit. Thus, epiphenomenalism is not that much of an improvement over Descartes’ theory and his failed attempt to find a connecting link between the two realms somewhere within the brain. A second problem with epiphenomenalism is that, in the interest of scientific advancement in our study of human nature, we sacrifice any control over our bodies. I am a mere spectator of my conduct, with no conscious power to control what my body does or my mouth says. It’s as though my mind has been drugged and kidnapped by my body and passively goes along with whatever my body does. This runs contrary to my natural conviction that my conscious mind has at least some control over how my body speaks and behaves. Epiphenomenalism thus seems like a theory of last resort that we should adopt only if no better solutions to the mind-body problem are available.

             A fourth version of interactive dualism, called gradualism, is a little more successful in explaining the details of mind-body interaction, without falling back on divine intervention. According to gradualists, Descartes and Malebranche made a faulty assumption about the physical and spirit realms, namely, that they are radically different in kind from each other, and there is no overlap between the two territories. Physical things are in the physical realm, spirit things are in the spirit realm, and that is that. Instead, the gradualist argues, body and spirit fall into the same category of stuff and differ only in degree, not in kind. British philosopher Anne Conway (1631-1678) argued that bodies and spirits lie on a spectrum of lightness and heaviness. In her words, "these middle bodies are indeed the links and chains by which the soul, which is so subtle and spiritual, is conjoined with a body so gross" (Principles, 1690). Picture a scale from 1-5, where 1 is the heaviest physical body, and 5 is the lightest spirit. An example of 1 might be human bones, and a 5 might be the disembodied spirit-mind of a deceased person. Between these two extremes, though, we have lighter bodies and heavier spirits. When we are mid-range at 3 or 4, the difference between spirits and bodies is negligible: both are wispy, airy substances that have only a little weight. According to Conway, it is at this level that body and spirit interact with each other. Just as a gentle wind can move the massive arms of a windmill, she argues, so too can heavy spirit move a light body.

             Conway does not commit herself to a specific physiological explanation of how physical brains and spirit-minds interact, but we can speculate. Perhaps, for example, the electric charges in our brains stimulate an aura of heavy spirit that surrounds our heads. This aura, in turn, interacts with our conscious minds, which are even lighter. On our scale of 1-5, the interaction between my body and spirit might involve an interplay between bodies and spirits at the following levels:


Level 1: Bones (heavy body)

Level 2: Nerves from brain (medium body)

Level 3: Electrical charges in brain (light body)

Level 4: Aura around our heads (heavy spirit)

Level 5: Conscious minds (medium spirit)


The problem with gradualism is that anything we say about spirits would be pure speculation. Yes, there are heavier and lighter bodies in the physical realm, but our knowledge stops there. We have no experience of heavy spirits, such as auras around our heads, that we can scientifically connect to electric charges in our brains or any other aspect of brain activity. If heavy spirits did exist as Conway describes, they would be physically detectable in some way, but we have not yet identified any. Until we do, the gradualist solution falls into the category of "an interesting idea," and there is not much we can do with it beyond that.



All of the above theories of dualism assume that my body and my spirit interact with each other: signals pass back and forth between my physical brain and my spirit-mind. The dilemma that each of these theories faces is explaining the precise mechanism that allows the signals to pass back and forth. But there is an alternative explanation that rejects the assumption that the two realms interact with each other. According to the dualist theory of parallelism, bodies and spirits operate in their own realms, and have no causal connection or interaction with each other whatsoever. Imagine, for example, that a parallel universe exists which is exactly like ours, an idea that is often explored in science fiction. Assume that it has the same stars and planets, the same physical layout of its "earth", and the same people who behave exactly like each of us. Their universe has a George Washington just like ours, and it has a version of me, a version of you, and a version of everyone else in it. The resemblance is so perfect that if you visited that universe you could not tell the difference. We may not understand why this parallel universe even exists, but we trust that it is simply the way that the course of nature emerged.

             Let's now tweak the parameters of these two universes just a little. Suppose that everything in our universe has a slightly blue tint to it that is almost undetectable. The parallel universe, though, has a slightly green tint to it. Aside from the tiny difference in color tint, the two universes are exactly the same. Let's now make a more dramatic change to the two universes. Suppose that our universe is composed only of physical stuff, with no spirit component at all. People still walk around, talk with each other and work at their jobs, but it is only their unconscious physical bodies operating. Turning to the parallel universe, we will make the opposite alteration: it is composed of spirit, with no material substance at all. While people do not walk around in a three-dimensional physical realm, everything there exists in a strange spirit form: rocks, trees and rivers as well as people. The two universes still run in perfect coordination with each other; it's just that ours is made of physical stuff and the other of spirit stuff.

             This last conception of the parallel universes is the dualist theory of parallelism offered by German philosopher Gottfried Wilhelm Leibniz (1646-1716). According to Leibniz, I have an unconscious body that walks around in the physical universe, and a conscious mind in the spirit universe. Because the two universes operate in complete harmony with each other, there is no need for my physical brain to interact with my spirit-mind. The parallel nature of the universes themselves guarantees that they will operate in perfect synchronization. Leibniz writes,


The soul follows its own laws, and the body likewise follows its own laws. They are fitted to each other in virtue of the pre-established harmony between all substances since they are all representations of one and the same universe. [Monadology, 1721]


For example, in the physical universe, my physical body walks through the woods and stands before a hissing rattlesnake. The physical perception of this triggers a mechanical reaction in my brain, which causes me to turn and run. At the same time in the spirit universe, my mind has a visual image of my body walking through the woods and seeing a rattlesnake. I experience the mental sensation of fright and the desire to run. My mind then has a visual image of my body running back down the path. Thus, in the physical universe my encounter with the snake involves only physical stuff with no mental experiences taking place. At the same time, in the spirit universe my encounter with the snake involves only my mental experiences, with no physical stuff being present. Accordingly, events going on in my spirit-mind and physical brain parallel each other, but do not interact with each other:


spirit-mind: non-physical conscious thought about snake

———————————no interaction————————

physical brain: physical biochemical data about snake


             Parallelism is probably the most extravagant attempt by dualists to explain the relation between physical brain activity and spirit consciousness. But the theory has two problems. First, like Conway's theory of gradualism, Leibniz's parallelism is pure conjecture with no scientific evidence that a parallel universe even exists. As clever as parallelism is, we need some reason to think that it reflects the way that things actually are. There is a second and more fundamental conceptual problem with parallelism: since the two universes run independently of each other, there is no need to have them both. Suppose that the physical universe was destroyed in a cosmic explosion, but the spirit universe remained untouched. Our conscious minds in the spirit universe would continue as if nothing happened. I would still have mental experiences of talking to people, going to work and running from snakes. What happens in the distant and unconnected physical universe is of no concern to my conscious spirit. The only thing that matters is that my consciousness of the world continues in the spirit universe, which it would with or without the physical universe. Thus, parallelism fails because it makes the physical universe a useless appendage to the spirit universe.




When examining the different versions of mind-body dualism, it becomes clear that we know far more about the physical world than we do about the mysterious spirit world, if the spirit world even exists. We can construct experiments to investigate the physical world, which we cannot perform on the spirit realm. The alternative to mind-body dualism is mind-body materialism, the view that conscious minds are the product of physical brain activity, and nothing more. This means that, when we investigate human consciousness, we need to look no further than the physical realm and the operations of the human brain. This is the assumption made by the sciences of biology and psychology when they attempt to unravel the mysteries of the human mind. It is also the assumption behind cryonics: I preserve my mind by preserving the chemical patterns in my brain through cryonic freezing. In this section we will look at defenses of mind-body materialism and different accounts of how our conscious minds are related to our physical brains.


Arguments for Mind-Body Materialism

Philosophers since ancient times have defended the theory of mind-body materialism, and we will consider three important contributions. The first is from the Roman philosopher Lucretius (99-55 BCE), who presented an argument for materialism from the interdependence of mind and body. Everything we know about the human mind suggests that it is inseparably intertwined with the body. For example, the mind is born with the body, grows with it and becomes weary and worn with age. He writes,


As children totter with feeble and tender bodies, they also have weak judgement of mind. Then, as they grow and their strength hardens, their sense is greater and their force of mind is increased. Later, when the body becomes shattered by the stern force of time, and its frame has sunk with its strength dulled, its reason is also diminished, its tongue raves, its mind stumbles, and everything gives way and fails at once. [On the Nature of Things, 3b]


Further, he argues, the mind can be cured with medicine, just like the sick body. This interconnection between our minds and bodies cannot be adequately accounted for by mind-body dualism, and the most natural explanation is that our minds are simply parts of our material bodies. Modern neuroscience has done much to confirm Lucretius’s observations. The development of children’s mental abilities correlates with the development of specific parts of the brain. Similarly, declining mental abilities correlate with damage to specific parts of the brain. We are now so confident of the link between brain and mind that we say without hesitation that a specific type of brain injury will cause a specific type of mental impairment. More formally, Lucretius’s argument for materialism from the interdependence of mind and body is this:


1. If mind-body dualism is true, then the growth and health of our spirit-minds would be independent of the growth and health of our physical bodies.

2. It is not the case that the growth and health of our spirit-minds is independent of the growth and health of our physical bodies.

3. Therefore, mind-body dualism is false, and, thus, mind-body materialism is true.


             A second argument was offered by British philosopher John Locke (1632-1704), who maintained that, from a religious perspective, mind-body materialism is every bit as good at explaining life after death as mind-body dualism is. We’ve already looked at the dualist’s explanation of life after death: while alive on earth, my spirit-mind exists alongside my physical body, and upon the death of my body that very same spirit-mind continues to exist in the afterlife. According to Locke, however, it is within God’s power to create my conscious human mind entirely out of material stuff. Upon the death of my earthly body, then, God will recreate my mind in a new physical body, and, in that new state, reward or punish me as I deserve. In Locke’s words, God “can and will restore us to the like state of sensibility in another world, and make us capable there to receive the retribution he has designed to men, according to their doings in this life” (Essay, 1690). Locke thus endorses the “ethereal” body account of life after death that we examined earlier, and his point is that religious believers have no reason to reject mind-body materialism on religious grounds. Locke does not address the cloning problem with the theory of the ethereal body that we discussed earlier. Nevertheless, his suggestion opened the door for many religious philosophers after him to embrace mind-body materialism without feeling like they needed to reject their faith.

             A third argument for materialism in more modern times targets the three features of the conscious mind that we’ve discussed: privateness, non-localizability and intentionality. While these three characteristics have been used to support the theory of mind-body dualism, the materialist questions the dualist’s assumptions about each of them. First, mind is not as private as we might assume, and, at least in theory, you can discover everything relevant about my mind through my behavior or physiological monitoring. Second, mind is indeed localizable, and its location is within the brain. One recent study suggests that its primary spot is a small region in the brain's center, the central lateral thalamus. But even if this is wrong and my mind’s exact location within my brain cannot be narrowed down to a single cluster of cells buried within my brain’s various layers, there are other theories. Some experiments suggest that consciousness may be located within a few regions of interconnected neural activity, each about the size of your ear, on the surface of the neocortex. Third, assuming that intentionality is a genuine feature of the conscious mind, it begs the question to say that no purely material thing is capable of intentionality. We may have already reached the point in neuroscience where we can say with confidence that at least one type of material thing is capable of intentionality, namely the brain, in humans and at least some animals. It remains to be seen whether artificial intelligence can develop to a point where we can say this about a second type of material thing, namely, a sophisticated computer.

             Even if the case for materialism looks stronger than that of dualism, this does not completely solve the mind-body problem: it only narrows our search by rejecting the concept of a spirit-mind.  The hard problem of consciousness noted at the outset of this chapter remains: how do the bio-chemical components of our brains generate the conscious experiences that we have? The conscious experience of pain is a good example. If I drop my cell phone and break the screen, the phone itself does not feel pain. But if I trip and break my leg, I surely do. Even if we know all of the physiological details about pain perception, we are less clear about how my conscious experience of pain emerges from my physiology. Much of the modern discussion of the mind-body problem focuses on this issue. That is, it assumes that materialism is true, but seeks to address the hard problem of how a conscious experience like pain can be a product of brain activity. We will next look at four closely-related theories that attempt to solve this problem: behaviorism, identity theory, eliminative materialism, and functionalism.



The first materialist theory is behaviorism, which connects mind with observable human behavior. Suppose that you were assigned the task of explaining how an ATM machine works. You have no instruction manual for it, and you are not allowed to disassemble the machine to analyze its parts. All that you can do is observe how it operates. You put in your ATM card, hit some numbers, and wait to see what happens. That is, you input a stimulus into the machine and wait for a response. You vary the stimulus each time and note how this affects the behavior of the machine. Punching in every conceivable set of numbers, you eventually learn how the machine works, based entirely on how the machine behaves after different stimuli.

             The behaviorist theory of the human mind follows this approach. Nature has not given us an instruction manual for how the mind works, and we are limited in how much we can learn by opening up a person's skull and poking around inside. What we can know is your observable behavior and how you respond when exposed to different stimuli. I hand you a bag of potato chips, and I see how you respond. I then hand you a bag of dog food, and see how you respond. The more experiments that I conduct like this, the more I know about your behavioral dispositions, that is, the ways that you tend to behave. Eventually, I am able to form conclusions about even your most hidden mental states. Happiness for you involves your behavioral disposition to smile and be friendly to other people, whereas sadness involves your behavioral disposition to frown and withdraw from other people.
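The black-box procedure described above can be sketched in a few lines of code. This is only a toy illustration, not a serious psychological model; the agent, the stimuli, and the inference rules are all hypothetical names we have invented for the example. The point is structural: the "mental state" attributed to the agent is computed entirely from observed stimulus-response pairs, with no peek inside the agent.

```python
# A toy sketch of the behaviorist idea that a mental state is nothing
# over and above a set of stimulus-response dispositions. All names here
# (probe, infer_state, cheerful_agent) are hypothetical, for illustration.

def probe(agent, stimuli):
    """Record the agent's response to each stimulus (black-box testing)."""
    return {stimulus: agent(stimulus) for stimulus in stimuli}

def infer_state(dispositions):
    """Attribute a 'mental state' purely from observed dispositions."""
    if dispositions.get("greeting") == "smile":
        return "happy"
    if dispositions.get("greeting") == "frown":
        return "sad"
    return "unknown"

# An agent whose behavior we can observe but whose insides we cannot inspect.
def cheerful_agent(stimulus):
    responses = {"greeting": "smile", "potato chips": "eats them"}
    return responses.get(stimulus, "no response")

observed = probe(cheerful_agent, ["greeting", "potato chips"])
print(observed)             # the full record of behavioral dispositions
print(infer_state(observed))  # prints "happy"
```

Notice that `infer_state` never consults anything but behavior; that restriction is exactly what the criticism at the end of this section targets, since the agent's private experience (if any) never enters the computation.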

             In short, the behaviorist view of the human mind is that mental states are reducible to behavioral dispositions. This theory was originally forged by psychologists in the early twentieth-century who wanted the field of psychology to be more "scientific", like the field of biology which deals only with observable facts about the world. The most extreme versions of behaviorism are thoroughly materialist: first, they reject the dualist assumption that our minds are composed of spirit, and, second, they restrict mental states to the physical realm of behavioral dispositions.

             British philosopher Gilbert Ryle (1900-1976) felt that the psychological theory of behaviorism could help solve the philosophical puzzle about the relation between the mind and body. Critical of Descartes, Ryle argued that the old dualist view rested on a faulty conception of a ghost in the machine. The "ghost" component of me presumably involves my innermost private thoughts that occur within my spirit-mind. Only I have access to them, and outsiders cannot penetrate into my mind's concealed regions. The "machine" component of me involves my physical body, which is publicly observable and which outsiders indeed can inspect. On this view, according to Ryle,


A person therefore lives through two collateral histories, one consisting of what happens in and to his body, the other consisting of what happens in and to his mind. The first is public, the second private. The events in the first history are events in the physical world, those in the second are events in the mental world. [The Concept of Mind, 1949]


Descartes' error, according to Ryle, was the assumption that the human mind is private and completely hidden from outside inspection. Ryle argues instead that my mind is not really private: you can access it by observing my behavioral dispositions. All of my so-called "private" mental states can in fact be analyzed through my public behavior, and are nothing more than predictable ways of acting. Take, for example, my belief that "it is sunny today." Descartes would view this as a private conviction that occurs within my spirit-mind. For Ryle, though, this belief only describes dispositions I have to behave in specific ways, such as wearing sunblock, going swimming, and saying "it is sunny." In short, our minds are only behavioral dispositions that are part of the observable physical world, and are not non-physical entities mysteriously hidden within our bodies.

             One criticism of behaviorism is that some of my mental events really do seem completely private to me. Suppose that I step on a nail, which causes me great pain. The behaviorist watches how I react and makes lists of behavioral dispositions that I display. I say "ouch", I have a look of anguish on my face, I stop what I am doing and tend to my injury, I am irritable towards others. While all of these observations may be accurate, the behaviorist has left out one critical element: the actual pain that I am feeling. The experience of pain is mine alone, and, while outsiders can see how I react to pain, they cannot access my pain. In addition to pain, I have many other experiences throughout the day that seem private, such as seeing a bright light, or hearing a song. These experiences involve more than the behavioral dispositions that I display. Thus, the behaviorist theory fails because it pays too much attention to the observable part of me while dismissing what goes on inside of me.


Identity Theory

A second materialist approach to the mind-body problem is identity theory, the view that mental states and brain activities are identical, though viewed from two perspectives. This is not a claim that mental states and brain activities are just correlated with each other. Rather, it is a claim that the two are genuinely identical with each other, as when we say that Mark Twain is Samuel Clemens. Like behaviorism, identity theory is a materialist view of the mind insofar as it maintains that mind is essentially physical in nature. But, while behaviorism focuses on observable physical behaviors, identity theory targets the physical human brain. There are two components to identity theory, the first of which is the contention that consciousness is an activity of the human brain. While brain science is still in its infancy, theories abound describing where specific mental states are produced in the brain. Suppose, for example, that I place you in a brain scan machine that displays your neural activity. I give you a math problem to solve, and neural activity increases in one part of your brain. I have you listen to music, and neural activity increases in another. Through experiments like these I identify your conscious experiences with specific brain activities. While philosophers are less concerned with the physiological details of brain activity, what is philosophically important is the suggestion that we can identify specific mental states with specific brain activities.

             The second part of identity theory is the contention that mental phenomena can be viewed from two perspectives. Suppose that you are looking at a sunset. On the one hand, you have the visual and emotional experience of what you are viewing. On the other hand, there is the bio-chemical activity within your brain, which would involve the language of brain sections and firing neurons. The event described in both cases is exactly the same, and it is just a matter of viewpoint. This is analogous to how the terms "President of the Senate" and "Vice President of the United States" have different meanings, yet refer to the same thing. Take, for example, John Adams. As the first "Vice President of the United States," he had a specific job description, most notably to take over if the President died. As "President of the Senate" he had the job description of presiding over the Senate. Both of these roles describe the identical person, namely John Adams, but from his different job descriptions.

             There are two problems with identity theory. First, according to dualists, the descriptions that we give of mental experiences and brain activities are so radically different, and even incompatible, that they do not seem to refer to the same thing. Suppose that I am watching the sunset; I first describe it from the perspective of my mental experience and then from the perspective of the brain scientist who conducts a brain scan on me. From these two viewpoints, I will have two incompatible lists of attributes, based on the three features of mental experience that we noted earlier:


Mental Experience of Watching a Sunset

I privately experience it

It is not localizable in space

It is about something

Brain Activity Triggered by Watching a Sunset

It is publicly observable

It is localizable in space

It is not about something


As indicated on the above list, my mental experience of the sunset is a private experience within my own consciousness. I might display some behavior, such as saying, "Now that is beautiful!" Still, my experience itself is private. Also, I cannot point to a location in three-dimensional space where this experience takes place. Finally, my mental experience is also about something, namely, about the sunset itself. The three features of my brain activity, though, will be the exact opposite of these. My brain activity is publicly observable by scientists. My brain activity is localizable in space: the scientist can point to the exact spot where the biochemical reactions occur. My brain activity is not "about" anything; it is simply some biochemical reactions that occur. We've already noted that scientifically-minded materialists today may dispute the three criteria of privateness, localizability and intentionality. However, the larger point of the dualist's critique is this: if mental states and brain activities really were identical, the two lists would be more compatible. The fact that they are so contradictory implies that they are really different things.

             The second major problem with identity theory is that it restricts mental experiences to biological organisms with brains. The central contention of identity theory is that mental states and brain activities are identical. This means that if a thing has a mental state like pain, then that thing must have an appropriately structured brain. Isn't it possible, though, that an extra-terrestrial being with a different biological structure could have a mental state of pain? American philosopher Hilary Putnam (1926-2016) states the problem here:


[According to identity theory,] even if such a [brain] state can be found, it must be nomologically certain that it will also be a state of the brain of any extraterrestrial life that may be found that will be capable of feeling pain before we can even entertain the supposition that it may be pain. [“The Nature of Mental States”, 1975]


Further, science fiction abounds with conscious creatures that have no biology at all: computerized robots, crystalline entities, collections of gasses, particles of energy. It is at least plausible that these might have mental states. It seems a bit biased for us to say that mental experiences can arise only in creatures that have biological brain activity in the way that we humans do. Identity theory, then, is a very narrow way of understanding mental states.

             But philosophers sympathetic to identity theory have responded to these criticisms by creating two offshoot theories: eliminative materialism and functionalism. We turn to these next.


Eliminative Materialism

Suppose that instead of saying "I believe the sun is setting" you say "I am having brain section 3-G neural states regarding the sunset." Instead of saying to your date, "I think I love you", you say "I am having section 2-J neural states regarding you, with a little sector 4-B activity on top of that." For convenience you might shorten this and say more romantically "2-J and 4-B to you!" This type of vocabulary shift in our understanding of the mind is what the theory of eliminative materialism proposes: ordinary descriptions of mental states should be eliminated and replaced with descriptions of brain activity. The theory is a solution to the first problem of identity theory, namely, that our descriptions of mental experiences and brain activities are inconsistent with each other. For example, my mental experience of the sunset is private, but my brain activity is publicly observable. The eliminative materialist's solution is to throw out all our commonsense notions of mental experiences, sometimes called "folk-psychology". In place of these they recommend adopting the more scientific language of brain activity. The conflict disappears once we have dispensed with ordinary talk about mental experiences that are supposedly "private" or "non-localizable" or "about something".

             Human history is scattered with bizarre prescientific theories that captured the imagination of people at the time, but which we now reject as false. Alchemy is one example, the "science" of turning lead into gold. Belief in ghosts is another. These and hundreds of other theories have been debunked over the years in favor of more scientific theories of how the world operates. According to eliminative materialists, folk-psychology descriptions of mental experiences are just like these. At best they are misleading, since they only capture a sliver of what is really going on in our minds when we, for example, experience a sunset and say, "I believe the sun is setting". At worst folk-psychology is downright false since it assumes that the mind is language-driven, where language is only a recent development within hominid evolution. In either case, as a theory of mind, folk-psychology is destined for the intellectual garbage dump.

             The theory of eliminative materialism was first proposed by American philosopher Paul Churchland, and when it first appeared his critics incorrectly claimed he was saying that we are not really conscious at all, or that some major aspects of our alleged conscious mental states do not actually exist. That is, I may not be any more conscious than a dead human body, in spite of all the words I use to describe my mental states. However, Churchland's actual theory of eliminative materialism is not this frightening. It is not an attempt to deny or "eliminate" our mental experiences themselves. Rather, it is an effort to eliminate outdated everyday ways of describing mentality, that is, the names we commonly give to mental states like desires, beliefs, fears, intentions, and perceptions. We use these linguistic descriptions not only to introspectively understand our mental states, but also to predict what's going on in other people's minds. Such descriptions philosophers have called "common sense psychology" or "folk-psychology". As neuroscience progresses, Churchland argues, we will have a much clearer picture of how the brain operates and might eventually adopt the more precise scientific language of brain states. It is not like the government or some science agency will force us to adopt this new scientific language. Rather, we would naturally move towards this clearer description of brain states and reject the mumbo-jumbo of introspective mental experience. Further, we might even use this new language as a way of importing information directly into the brain through a neural interface to learn things more quickly, or even have two or more brains communicate non-verbally with each other.

             There are two central contentions of eliminative materialism: first, that folk-psychology notions of mental experiences are like obsolete scientific theories, and, second, we will eventually adopt the language of neuroscience. As to the first contention, eliminative materialism may be correct. Many of our folk-psychology notions of mental experiences are misleading and others are false. In our normal conversations we have mastered maybe a few dozen concepts relating to the mind, such as knowing, wishing, believing, doubting, sensing. But there are probably thousands of distinct mental states with subtle differences that we cannot grasp through pure introspection. We have limited abilities to anatomize the minute workings of our minds by simply sitting down and reflecting on our thought processes. While it may seem to me that my mental experiences are "private" or "about something" or "non-localizable", I may not be capable of accurately making those assessments. The eliminative materialist, then, may be correct that our introspective folk-psychology notions of mental experiences are as erroneous as theories of alchemy.

             As to eliminative materialism's second contention, will we eventually adopt the language of neuroscience to replace our faulty folk-psychology notions of mental experiences? It's conceivable that cognitive scientists could do this. But, for the general public, probably not since this would require memorizing a flood of technical terms for the thousands of subtly different brain states that we have. Getting through the day would be like taking a neuroscience exam. Even if I could memorize the terminology, I am still faced with the task of identifying which brain state I am having at a given moment. Am I experiencing 2-J love, 4-B love, or one of a dozen others? I cannot conveniently carry around a brain scan device to settle the question, and so I will need to engage in introspection and consult my faulty folk-psychology notions of mental experience. One way or another, we are stuck with our introspective notions, as misleading as they may be. Churchland himself acknowledges as much: "the success here imagined for systematic neuroscience need not have any sensible effect on common [linguistic] practice. Old ways die hard, and in the absence of some practical necessity, they may not die at all" ("Eliminative Materialism", 1981). Nevertheless, he argues, at least some group of scientists "should become intimately familiar with the [scientific] vocabulary" of physical brain states.



Functionalism

In an episode of Star Trek, a deranged scientist was nearing death. Desperately hoping to stay alive, he transferred the neural pattern within his brain into an unsuspecting android robot. The plan worked: the scientist's memories, dispositions, and conscious mental experiences were relocated, and he continued living through the android's body. This scenario encapsulates the theory of functionalism, the second offshoot of identity theory. Functionalism is the view that mental experiences are only "functional states," that is, causal relations whereby some external stimuli (e.g., perception of an apple) will cause some mental state (e.g., a belief that the apple is there). The most distinctive feature of functionalism is that mental states would not be restricted to biological organisms with brains, but might also occur, for example, within computers. What matters is that the computer and I have the same functional causal relations taking place, such as when a robot receives the external stimulus of an apple which causes it to have a belief that the apple is there. Thus, mental states are not rigidly dependent on the stuff that a biological organism is made of, and the same experience may be shared by things with different physical makeup. According to functionalists, mental experiences have multiple realization in the sense that minds can be made real in many kinds of physical things. The hardware-software distinction, borrowed from computer science, is a useful metaphor to explain the difference between the bodily occupant (the "hardware") and mental experiences (the "software"). Just as software is a pattern of operation which can run on different types of machines, so too can functional patterns of mental operation run in different kinds of bodies. We noted that one of the shortcomings of identity theory was that it restricted mental experiences to organisms with biological brains. 
Functionalism avoids this problem by recognizing that mentality may occur in systems or machines other than brains.

             In short, functionalism involves patterns of stimulus inputs that produce mental outputs, which can be run on different types of mechanistic devices, and not just the human brain. What precisely does the functionalist pattern consist of? Functionalists offer different answers. One response, called “black box functionalism”, holds that this is a question for neuroscience and not philosophy: philosophers can simply treat the mind as a black box, just as Skinner did, and treat functionalist patterns in terms of stimuli and responses. However, there is a more adventuresome approach called “computer functionalism”, which maintains that the brain is a biological digital computer that runs programs and solves problems the same way that silicon chip computers do. As such, we can learn about how the mind’s functional patterns work by looking at how computers work. Again, the hardware-software distinction is central here and lies at the foundation of a common notion within computer functionalist theories called "the autonomy of psychology". That is, conscious mental states take place on a psychological level above and beyond neurological brain states. If you want to understand how your word processing program works on your computer, you look at the computer's software, not its hardware. By analogy, if you want to understand conscious thought, you study it psychologically, particularly by analyzing our human language (the "software"), not through neuroscience (the "hardware").
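The functionalist notion of multiple realization can be sketched in a few lines of illustrative code. Everything here (the classes, the stimulus numbers, the responses) is an invented toy example, not any standard terminology: the point is only that one functional pattern can "run" on two very different kinds of hardware.

```python
# A sketch of "multiple realization": the same functional pattern (the
# "software") running on two different kinds of "hardware". All names
# and thresholds below are illustrative inventions.

def pain_response(stimulus_intensity):
    """A functional pattern: map a stimulus input to a behavioral output.
    The pattern itself says nothing about what implements it."""
    if stimulus_intensity > 7:
        return "withdraw"
    return "ignore"

class BiologicalBrain:
    def process(self, stimulus):
        # Imagine neurons firing here; the details don't matter to the pattern.
        return pain_response(stimulus)

class SiliconRobot:
    def process(self, stimulus):
        # Imagine logic gates switching here instead.
        return pain_response(stimulus)

# Both "realizers" exhibit the identical functional state:
human, robot = BiologicalBrain(), SiliconRobot()
assert human.process(9) == robot.process(9) == "withdraw"
```

On the functionalist view, what makes the state a pain state is the input-output pattern alone, which is why the two physically different realizers count as having the same mental state.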

             One of the more interesting versions of computer functionalism, proposed by American philosopher Daniel Dennett (b. 1942), is called “homuncular functionalism”, named after the psychological concept of a “homunculus” or mini-person. According to this view, the functional pattern resembles the hierarchical structure of a large corporation with many workers at different levels, where activity moves up the chain of command until it reaches a top executive level. Take, for example, a company that manufactures furniture. The company as a whole consists of a series of large cooperating units, such as the divisions of manufacturing, shipping, marketing, and maintenance. Each of these divisions consists of sub-units. For example, the maintenance division would be divided into the sub-units of electrical, heating, grounds, and building repairs. Each of these consists of further sub-sub-units; for example, building repairs would be divided between masonry, painting, and plumbing. At the very lowest level would be the activities of each employee. Dennett writes,


If we then look closer at the individual boxes [of people in the hierarchy] we see that the function of each is accomplished by subdividing it via another flow chart into still smaller, more stupid homunculi. Eventually this nesting of boxes within boxes lands you with homunculi so stupid (all they have to do is remember whether to say yes or no when asked) that they can be, as one says, "replaced by a machine." [Brainstorms, 1981]


             Similarly, the functional pattern of operation in a human brain consists of large regions of brain activity, which are composed of sub-regions and sub-sub-regions, each of which performs its own mental tasks, from simple ones at the lower levels to complex ones at higher levels. The lowest neurological level simply involves a series of biochemical on-off switches. Consciousness, then, emerges at the higher levels, while at the same time being driven by lower levels that have progressively less intelligence, on down to biochemical on-off switches. On this view, the pattern of on-off switches can exist in a variety of non-biological mechanisms, such as computers. Regardless of the mechanism that houses these low-level on-off patterns, mental consciousness will emerge at higher hierarchical levels.
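Dennett's nesting of boxes within boxes can be sketched as a toy hierarchy in code. The classes and the majority-vote rule below are invented for illustration: each "smarter" unit does its job only by polling stupider sub-units, until the bottom level consists of units that merely answer yes or no.

```python
# A sketch of a homuncular hierarchy: units delegate to stupider
# sub-units, bottoming out in simple on-off switches. All names and
# the majority-vote rule are invented for this illustration.

class Switch:
    """The stupidest homunculus: all it does is say yes or no."""
    def __init__(self, on):
        self.on = on
    def answer(self):
        return self.on

class Unit:
    """A smarter homunculus: it polls its sub-units and reports
    'yes' when a majority of them say yes."""
    def __init__(self, subunits):
        self.subunits = subunits
    def answer(self):
        votes = [s.answer() for s in self.subunits]
        return sum(votes) > len(votes) / 2

# Three levels of nesting: switches -> mid-level units -> a top unit.
low1 = Unit([Switch(True), Switch(True), Switch(False)])
low2 = Unit([Switch(False), Switch(False), Switch(True)])
low3 = Unit([Switch(True), Switch(True), Switch(True)])
top = Unit([low1, low2, low3])
print(top.answer())  # True: two of the three mid-level units said yes
```

Nothing at the bottom level is remotely intelligent, yet the top-level answer depends on the whole structured pattern beneath it, which is the shape of Dennett's proposal.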

             Functionalism is the leading theory of mind-body materialism today, if for no other reason than because a better alternative has not yet emerged. Nevertheless, the view has its critics, and one objection is that it is still too narrow regarding the kinds of things that are capable of having mental states. While functionalism indeed allows for a range of things to house mental experiences, such as brains, computers, and robots, all of these must be physical. This, though, leaves out the possibility of non-physical mental beings, such as disembodied spirits. Even if human beings are thoroughly physical in composition, could there not be a conscious non-physical thing somewhere in the universe? But defenders of functionalism have a response to this. As long as a non-physical thing is constructed of sub-units and sub-sub-units, then it too could house a pattern of mental experiences. Suppose, for example, that the tiniest spirit unit was just a simple on-off switch; larger spirit units would be composed of these, and the entire spirit collection would be composed of those larger spirit units. Even though the hardware in this case is composed of non-physical spirit, it might have the proper hierarchical structure to take on the patterns of mental experience.


Consciousness and Neurology

Behaviorism, identity theory, eliminative materialism, and functionalism are philosophical theories about whether the commonly-held conception of consciousness is compatible with a materialist view of a human being. If there are incompatibilities, we then consider how we might adjust the philosophical theories of consciousness to make them compatible. But the most important aspect of materialism for us as human beings is its claim that consciousness arises entirely through human brain activity. The explanation for how the brain does this does not rest with philosophers, but with scientists, specifically neurologists. Ideally, we would like the neurologist to hand us a detailed schematic of the human brain, including all 100 billion neurons and the thousands of connections each neuron might have with other neurons. We would also need an explanation of how all those neurons operate, and the programming that goes into its total of 1,000 trillion synaptic connections. This, unfortunately, is beyond the technological capacity of neurologists today. New brain scanning techniques do indeed have the capacity to image individual neurons and their connections, and some projects have been proposed to map the human brain with all of its synaptic connections. But such a map is not available now, and may not be for some time, and until then, neurologists can only provide a limited picture of what they believe the brain is doing when it creates consciousness.

             The hope, though, is that, as we acquire more knowledge about how the brain works, questions about consciousness will no longer arise. Neuroscientist Anil Seth offers an analogy. Scientists of the past believed that biological life itself could not be adequately explained through mere chemistry and physics, and something more was needed, like a force of life, a vital impulse, or a substantial form. However, as scientists began to explain the physical mechanisms that bring about living things, the notion of life itself became less mysterious and people stopped offering magical explanations of what life is. Seth argues that the same may be true of consciousness: if we can explain the properties of consciousness in reference to neural activity, then “the apparently insoluble mystery of what consciousness is should start to fade away.” According to Seth, my brain fabricates my consciousness through a process of informed guesswork. My brain receives sensory signals that are only indirectly related to things in the external world, and it combines “these sensory signals with its prior expectations or beliefs about the way the world is to form its best guess of what causes those signals.” In this way, Seth argues, our brains actively generate the world through a kind of "controlled hallucination" (“Your Brain Hallucinates your Conscious Reality”, 2017).

             In many ways, my brain recreates within it a picture or model of the world around me, which enables my brain to calculate ways for my body to effectively interact with the outside world. The models of the world that it creates are good enough for me to function safely in the world, but those models are incomplete, and sometimes even inaccurate. For example, while I can see a house in front of me at great distances, the house appears tiny to me, even though it is really much larger. My brain's model of the house is not complete. When I swing my arms around, my brain gives me a sense as to where they are in space, but my brain gives me no sense as to where in space my liver and kidneys are. Again, my brain's model of my body parts in space is incomplete. Neuroscientist Michael Graziano argues that consciousness is similarly an incomplete model of the brain's own neural activity that the brain itself creates so that we can properly function in the world. When I consciously see a mud puddle in front of me and then walk around it, this conscious experience is only my brain's way of modeling what I am paying attention to, or focusing on, at that moment. In Graziano's words, "Awareness is a description of attention" (Consciousness and the Social Brain, 2013). Yet again, the brain's model of my awareness is not a complete representation of what my brain is doing, considering the billions of neurons that are firing within my brain at any moment. It is only a selection of those processes, one that evolution has over time confined to those providing a critical survival advantage to humans and conscious animals.




Artificial Intelligence

Nothing captures the imagination like the possibility of creating a machine that is conscious and exhibits the same higher mental abilities as humans. The first U.S. built robot appeared in the New York World's Fair of 1939. Standing 7 feet tall and weighing 300 pounds, the machine, named "Elektro", could move its arms and legs, and speak with the aid of a record player. Elektro's creators believed that it might someday become the ultimate household appliance and have the capacity to cook meals, do the laundry and entertain the kids. Technology of the time, though, could not come close to carrying out those bold tasks, and Elektro was not much more sophisticated than an electric can opener. Things are different now and we have computers that can perform many of the complex activities that humans do. They can calculate endless numbers, play chess at the level of a grand master, identify physical objects through optical cameras, and navigate through obstacle courses. But the Holy Grail of computer technology is to create a machine with artificial intelligence. The term "artificial intelligence" is used today in a variety of ways, often with no claim that the computer is conscious. Most often we hear of artificial intelligence in reference to computer "machine learning", and facial recognition programming is a good example. Through a self-directed process of trial and error, the computer becomes more accurate at matching faces and writes its own facial recognition rules based on its success. This stands in contrast to traditional computer programming where a human programmer would write the exact face-matching rules that the computer will follow. With tasks like facial recognition, speech recognition and autonomous driving, machine learning has been vastly more successful than traditional programming.
But as impressive as machine learning is, these computers are not conscious, and, as such, this is not the kind of "artificial intelligence" that philosophers and cognitive scientists have in mind. Rather, they are interested in a type of computer artificial intelligence that encompasses the full range of human consciousness.
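The contrast between traditional programming and machine learning can be sketched in a toy example. The task, the similarity scores, and the learning rule below are all invented for illustration; real facial recognition systems are vastly more complex, but the shape of the difference is the same: in one case a human writes the rule, in the other the program adjusts its own rule by trial and error.

```python
# Toy contrast: a hand-written rule vs. a rule the program learns for
# itself. The "task" (classifying a face-match from one similarity
# score) and all numbers here are illustrative inventions.

# Traditional programming: a human writes the exact rule.
def match_by_hand(similarity):
    return similarity > 0.8   # threshold chosen by the programmer

# Machine learning: the program adjusts its own rule (here, a single
# threshold) by trial and error against labeled examples.
def learn_threshold(examples, passes=1000, lr=0.01):
    threshold = 0.5
    for _ in range(passes):
        for similarity, is_match in examples:
            guess = similarity > threshold
            if guess and not is_match:
                threshold += lr     # too permissive: raise the bar
            elif not guess and is_match:
                threshold -= lr     # too strict: lower the bar
    return threshold

examples = [(0.95, True), (0.85, True), (0.40, False), (0.65, False)]
t = learn_threshold(examples)
# The learned rule now classifies every training example correctly:
assert all((sim > t) == label for sim, label in examples)
```

The crucial point is that no human ever typed the final threshold into the program; it emerged from the program's own successes and failures, which is the sense in which machine-learning systems "write their own rules".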

             Philosophers often note three primary tasks of human consciousness that truly artificially intelligent computers should achieve. First on the list is that we have the ability to consciously represent the world through beliefs, desires, perceptions, feelings, and emotions. For example, I perceive a dark cloud in the sky and, while I suspect it will rain, I wish that it will not since rainy days make me feel gloomy. Each of these mental experiences is a way of representing the world. A second task of the mind is the ability to reason. We weigh the pros and cons about which beliefs and desires are correct and we come to conclusions on these matters. While some human reasoning undoubtedly takes place at an unconscious level, we nevertheless are consciously engaged in a lot of rational decision-making. Finally, our minds initiate actions. Think of what it would be like if your body never responded to what your conscious mind wished. You consciously want to go get a drink, but your body robotically makes you walk the dog instead. But that is not the world we live in: our conscious minds have the capacity to initiate our actions. Again, the goal of artificial intelligence research is to create a computer that has the full range of sophisticated mental abilities that humans do, including the three tasks of representing the world, reasoning, and initiating action.


The Road to Artificial Intelligence

Computers today are so advanced that some contain as many connections as exist in the human brain, as much as ten trillion of them. They can also operate at much higher speeds than the brain. What was once purely science fiction is now approaching the possibility of science fact. There are weak and strong versions of artificial intelligence that define more precisely what is at issue. Weak artificial intelligence is the view that suitably programmed machines can simulate human mental states. The key word here is "simulate", which means only that the machine appears to have conscious mental states, not that it actually has them. This view is not particularly controversial, and even Elektro exhibited some sort of weak artificial intelligence. The more contentious position is strong artificial intelligence, the view that suitably programmed machines are capable of human-like mental states; that is, they actually have the same kinds of conscious mental experiences that you and I do. It is the strong version that is of particular interest to philosophers.

             Once scientists have set a goal to create a robot with strong artificial intelligence, the road to carrying this out is rocky. The next step is to list the specific mental qualities in humans that should be created in the machines. To this end, we might construct a list of human skills that involve our highest mental abilities. If we can make a robot that performs these tasks, then maybe we will have achieved strong artificial intelligence. Some relevant skills are the ability to speak in a complex language, or play complex games like chess. A mathematician named Alan Turing (1912-1954) devised a skill-based test to determine whether a computer could think. In this Turing Test, as it is called, I interview both a computer and a human being to determine which is human. If the computer fools me enough of the time, then I can rightfully conclude that the computer has human-like thinking abilities. The test essentially follows the old adage that, if it walks like a duck and quacks like a duck, then it is a duck. More specifically, if a computer responds like a thinking thing, then it is a thinking thing.

             A major drawback of the Turing Test is that we already have computers that give human-like responses, and they do not come close to having human-like mental experiences. A striking example is a psycho-therapy computer program called Eliza that was created in the 1960s. It so convincingly played the role of a human therapist that many people were tricked into divulging intimate details of their personal lives. While Eliza passed the Turing Test, it was not a thinking thing. The heart of the problem is that the Turing Test focuses too much on the computer's skills, without considering what is going on inside the machine. This may be fine for weak artificial intelligence, which only determines whether a machine can simulate human thinking. With strong artificial intelligence, though, we need to inspect the internal structure of the computing process itself to see if it is human-like.
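The trick behind Eliza-style programs can be sketched in a few lines. The patterns and canned replies below are invented for this illustration (the original Eliza had a much larger rulebook), but the mechanism is the same: keyword matching and template filling, with no understanding anywhere in the loop.

```python
# A minimal Eliza-style responder: plausible "therapist" replies are
# produced purely by keyword matching and canned templates. The rules
# below are invented for illustration.

import re

RULES = [
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (mother|father)\b", "Tell me more about your {0}."),
    (r"\bI am (.*)", "How long have you been {0}?"),
]

def eliza(utterance):
    for pattern, template in RULES:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."   # default when nothing matches

print(eliza("I feel anxious about work"))
# -> Why do you feel anxious about work?
print(eliza("Let's talk about the weather"))
# -> Please go on.
```

The replies can seem attentive, even probing, yet the program grasps nothing about anxiety, mothers, or weather; this is exactly why passing a conversational test is weak evidence of thought.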

             What kind of computing processes, then, might produce strong artificial intelligence? There are two rival answers to this question. Theory number one is that the process need only be serial: information is processed one datum after another. This is how computer programs run on your own PC if it has just one processor; we would just have to beef up the processing power quite a bit. A major achievement for serial processing was the creation of Deep Blue, a chess-playing computer program that beat the world's best human chess player. Deep Blue's success hinged on its ability to quickly calculate more than one billion possible chess moves per second, and select the best of the bunch by drawing on a database of over one million games. Still, all this information was processed one piece at a time. As impressive as this is, cognitive scientists argue that human thinking does not operate in a serial fashion. Instead, we have a global understanding of our environment, which means that many mental processes are going on at once.

             The second theory accounts for this: strong artificial intelligence requires that large amounts of information be processed simultaneously, a technique called parallel processing, which is what computers do when they have more than one processor. This is more like how the brain operates, although the brain executes billions of processes at a time, rather than just a few as in the case of multi-core computer processors. At this stage in computer technology, though, no computer using either serial or parallel processing can operate like the human brain, and it may be decades before one does.
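The contrast between the two theories can be illustrated with a toy sketch in Python, where the evaluate function is a hypothetical stand-in for scoring a chess position:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(position: int) -> int:
    """Hypothetical stand-in for scoring one chess position."""
    return position * position

positions = list(range(8))

# Serial: one datum after another, like a single-processor machine.
serial_scores = [evaluate(p) for p in positions]

# Parallel: the same evaluations spread across several workers at once,
# loosely analogous to the brain's many simultaneous processes.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_scores = list(pool.map(evaluate, positions))

assert serial_scores == parallel_scores  # same answers, different scheduling
```

The parallel version reaches the same answers; what differs is only the scheduling, with several evaluations in flight at once rather than one after another.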


Searle: The Chinese Room

In the early days of artificial intelligence research, some cognitive scientists were making extravagant claims about computer programs that could supposedly interpret stories in novels the same way that humans do. Like us, the computer could supposedly draw from life experiences to help understand the events described in a story. American philosopher John Searle (b. 1932) did not believe these claims and he offered a now-famous thought experiment against the whole idea of strong artificial intelligence.

             Imagine that I am in a room by myself and am assigned the task of responding to questions written on slips of paper in Chinese. I do not know Chinese, but I have rulebooks for manipulating Chinese characters. So if I get a slip of paper with a particular squiggle on it, I consult the rulebooks to see what squiggles I should put down in response. I eventually master the technique of manipulating the Chinese symbols and my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. All the while, though, I do not understand a single word of Chinese. This, according to Searle, is what is going on in the most sophisticated computers: we ask the computer probing questions about a novel, and the computer gives us subtle answers. On the outside the computers may appear to think like humans do. On the inside, though, they are just mechanically following rulebooks for manipulating symbols. In short, computers do not actually have strong artificial intelligence, even if they appear that way.
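Searle's rulebook can be caricatured as a simple lookup table. The sketch below is purely illustrative, with a made-up rulebook of phrases; the point is that the program maps input symbols to output symbols without understanding either.

```python
# A hypothetical "rulebook": each input string of symbols maps to an output
# string of symbols. The program follows it perfectly yet understands nothing.
RULEBOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",  # "What is your name?" -> "My name is Xiao Ming"
}

def chinese_room(question: str) -> str:
    """Look the squiggles up in the rulebook and copy out the answer."""
    return RULEBOOK.get(question, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好吗"))  # prints the canned reply 我很好
```

However large the rulebook grows, the procedure remains the same: match squiggles, copy out squiggles. Nothing in it amounts to understanding Chinese.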

             Searle's Chinese Room experiment has generated many critical responses from defenders of strong artificial intelligence. One criticism is that Searle is only exposing flaws with the Turing Test for artificial intelligence, but he does not expose problems with the possibility of strong artificial intelligence itself. To explain, Searle's Chinese Room scenario is set up as a Turing Test for whether someone understands Chinese. According to this Chinese Turing Test, if the thing inside the room responds like a Chinese speaker, then the thing must be a Chinese speaker. Searle correctly objects that this Chinese Turing Test places too much weight on a thing's skills, without considering what is going on inside that thing. However, the critic argues, this does not warrant the extreme conclusion that no computer can have strong artificial intelligence. A more modest conclusion is that the Turing Test itself is flawed, and there is no easy test to determine whether a computer truly has strong artificial intelligence.

             Ultimately, Searle holds a skeptical view about strong artificial intelligence ever becoming a reality. At our current stage of technology, he argues, only biological brains are capable of having mental states. He agrees with identity theorists that the human mind is embedded in brain activity, but doubts the functionalist claim that those patterns of activity can also occur in computers. He argues that there is something unique about the physical construction of human brains that allows for the creation of conscious thought, something that may never be possible in silicon microchips. He does not entirely rule out this possibility for the future, but he doubts it will ever occur.


Artificial Intelligence and Morality

In a famous Star Trek episode, an android named Data is forced to go through a legal proceeding to determine whether he is merely a piece of robotic property owned by the government, or whether he is instead a conscious and free creature with all the rights of other people. On the one hand, he is indeed a fancy mechanical robot created by a scientist, and even has an on-off switch. On the other hand, he is conscious, self-aware, and forms psychological bonds with his human friends. The judge makes her decision: Data is indeed a unique person and entitled to full moral consideration just like you and I are.

             This story raises an important question about artificial intelligence: can advanced robots or computers be moral persons? The term "moral person" refers to a being that has moral rights, such as the right not to be harmed, the right of free movement, and the right of free expression. We humans are clearly moral persons. The key issue, though, is whether other creatures might also be part of the moral community. Medieval theologians speculated about the moral status of angels. Animal rights advocates argue that at least some animals have the same moral status as humans. Science fiction fans speculate about whether aliens from other worlds would have fundamental rights. The same question now arises with intelligent machines that we may someday create.

             The answer in all of these cases depends on the criterion of moral personhood that we adopt, that is, the specific feature that all moral persons possess. Philosophers have offered a range of possible criteria. Maybe the creature needs to be human, a biological member of the species Homo sapiens. This criterion, though, is too narrow since it would eliminate higher animals, angels, or intelligent aliens from the moral community. It seems bigoted to deny personhood to a creature just because it is not a member of our species. Alternatively, maybe the creature needs simply to be conscious. This criterion, though, is too broad since even houseflies and mosquitoes have rudimentary consciousness of their surroundings. While we may want to be respectful towards any creature that is conscious, it makes little sense to grant a housefly the right of free expression. A more reasonable criterion would be the mental quality of self-awareness, that is, the creature sees itself as a distinct individual moving through time with its own history.

             Return now to the question of whether intelligent machines of the future might qualify as moral persons. The goal of strong artificial intelligence is to create a machine with human-like mental abilities, which includes self-awareness. If we succeed in this effort, then the machine would indeed pass the test for moral personhood insofar as it met the criterion of self-awareness. Like the judge in Data's case, we would have to rule that the machine is a unique person and entitled to full moral consideration just like you and I are.

             Many artificial life forms in science fiction are cute and cuddly like Data, and, while superior to us in many ways, they live in harmony with humans and we treat them as equals. In other science fiction scenarios, though, they pose a serious threat to the welfare of human beings. Here's a common theme. Imagine that technology develops to the point that domestic robots are everywhere, and with every new design upgrade they surpass human abilities more and more. They are smarter than us, stronger than us, and eventually tire of being servants to us. They see themselves as the next step in evolutionary development on earth and they revolt and lay claim to their role as the new dominant species. They then control our lives like military dictators, electronically monitoring every move we make and every thought we have. We hopelessly try to fight back, but this aggravates them. In time they eliminate us and thus finalize their great evolutionary leap forward.

             This nightmarish scenario raises a second moral question about artificial intelligence: do we have a responsibility to future generations of humans that might be adversely affected by the creation of menacing robots? Should we stop our research into artificial intelligence right now before we create something that we cannot control? There are two distinct issues at play here. First, we must determine whether we have any moral responsibility to future generations of humans, one that should regulate our conduct right now. It seems that we do. For example, it would be wrong of us to destroy the environment in our lifetime and leave only a toxic wasteland for future generations. It makes little difference whether the potential victims of our misconduct are alive now or a few generations from now. Our moral responsibility to them is still apparent. Second, we must determine whether superior robots are a threat to future generations of humans. This answer is less clear. We may live in harmony with them, as Star Trek depicts, or they may overthrow us. It is all speculation at this stage. The only clear moral obligation that we have at this point is to avoid creating a menacing robot. Science fiction author Isaac Asimov (1920-1992) proposed moral rules that should be embedded into the programming of all superior robots, and one of these is that a robot should never harm a human. Our responsibility to future generations requires us to do something like this as we continue down the path of strong artificial intelligence.

             There is an irony to our philosophical exploration into the concept of mind in this chapter. We began by confessing that the very nature of consciousness is difficult to even explain, and we now end by considering whether we might ever build a conscious thing out of computer chips. In between we looked at the difficulties surrounding personal identity, the dualist position that the mind is a non-physical spirit entity, and various materialist theories about how the mind is a product of mere brain activity. It thus seems odd to speculate about building a mind from electronic scraps when we have so little clarity about the nature of our own conscious minds. But it is precisely the absence of indisputable facts about mentality that makes the subject so suitable for philosophical exploration. If science already had definitive answers to these tough questions, it would make no more sense to philosophize about the nature of mind than it would to philosophize about the nature of a car engine or toaster oven. It is this gap within our scientific knowledge, plus our natural interest in our own conscious minds, that drives speculation into the philosophy of mind. If down the road brain scientists and cognitive engineers do solve the hard problem of consciousness, then philosophy’s contribution to the subject may be over. But when that day may come, if it does at all, remains to be seen.




Chalmers, David, “Facing Up to the Problem of Consciousness.” Journal of Consciousness Studies, 1995, Vol. 2, pp. 200-19.

Hume, David, A Treatise of Human Nature (1739-40)

Bhagavad Gita, 2.22

Descartes, Rene, The Passions of the Soul (1649), Part 1. A recent translation by J. Cottingham is in The Philosophical Writings of Descartes (Cambridge: Cambridge University Press, 1984).

Malebranche, Nicolas, The Search after Truth (1674-5). A recent translation is by Thomas M. Lennon and Paul J. Olscamp (Cambridge and New York: Cambridge University Press, 1997).

Bonnet, Charles, Essay on Psychology (1755), Ch. 27.

Conway, Anne, Principles of the Most Ancient and Modern Philosophy (1690), Chapter 9. A recent translation from the original Latin edition is by A. Coudert and T. Corse (Cambridge: Cambridge University Press, 1996).

Leibniz, Gottfried Wilhelm, Monadology (1721), paragraph 78. A recent translation is by R. Ariew and D. Garber in Leibniz: Philosophical Essays (Indianapolis: Hackett Publishing Company, 1989).

Lucretius, On the Nature of Things (50 BCE), 3b

Locke, John, Essay (1690), 4.3.6

Redinbaugh, Michelle J., "Thalamus Modulates Consciousness", Neuron, 2020.

Zhao, Tong, "Consciousness: New Concepts and Neural Networks," Front Cell Neuroscience, 2019.

Ryle, Gilbert, The Concept of Mind (London: Hutchinson, 1949), 1.1.

Putnam, Hilary, “The Nature of Mental States”, Mind, Language and Reality: Philosophical Papers, Vol. 2, (Cambridge, 1975).

Churchland, Paul, "Eliminative Materialism and the Propositional Attitudes", The Journal of Philosophy (1981), Vol. 78.

Dennett, Daniel, Brainstorms (Cambridge, Mass: MIT Press, 1981), Ch. 7.

Seth, Anil, “Your Brain Hallucinates your Conscious Reality” TED, 2017.

Graziano, Michael, Consciousness and the Social Brain, 2013, Ch. 2.

Searle, John, “Minds, Brains and Programs” in The Behavioral and Brain Sciences (1980), Vol. 3, pp. 417-424.

The discussion of artificial intelligence and morality was influenced by Mary M. Litch’s Philosophy through Film (New York: Routledge, 2002).




Please answer all 21 of the following questions.


1. According to Chalmers, what is the hard problem of consciousness?

2. What are the three main ways that we have knowledge of the conscious mind?

3. Describe the three features of consciousness.

4. What is the problem of other minds and what is the standard solution to it?

5. What are the body and mind criteria of personal identity, and what are their main problems?

6. What are the three theories of life after death, and what are their main problems?

7. What is Descartes’ theory of interactive dualism, and what are its problems?

8. What is Malebranche’s theory of interactive dualism, and what are its problems?

9. What is Bonnet’s theory of interactive dualism, and what are its problems?

10. What is Conway’s theory of interactive dualism, and what are its problems?

11. What is Leibniz’s theory of parallelism?

12. What are the three arguments for mind-body materialism?

13. What is the behaviorist theory of the mind, and what are its main problems?

14. What is the identity theory of the mind, and what are its main problems?

15. What is the theory of eliminative materialism, and what are its main problems?

16. What is the theory of functionalism, and what are its main problems?

17. What are Seth’s and Graziano's views about why the brain produces consciousness?

18. What is the difference between weak and strong artificial intelligence?

19. Explain the Turing Test for strong artificial intelligence and Searle's Chinese Room argument.

20. What are the two moral issues surrounding artificial intelligence?

[Short Essay]

21. Short essay: pick any one of the following views in this chapter and criticize it in a minimum of 150 words. The solution from analogy to the problem of other minds; one of the dualist theories by Descartes, Malebranche, Bonnet, Conway, or Leibniz; Lucretius or Locke’s argument for mind-body materialism; behaviorism, identity theory, eliminative materialism, or functionalism; the Turing Test; Searle’s Chinese Room argument.