The Mind Club: Who Thinks, What Feels, and Why It Matters

  Putting it all together, survey respondents were asked to compare the different characters on different mental abilities. One question asked, for example, if Samantha (the girl) is more or less likely to be able to feel pain than Toby (the chimpanzee). Most people, by the way, said “more.” We calculated average ratings for each mental ability across all characters, and then we used a technique called factor analysis to determine how the mental abilities clumped together. Did people rank all the cryptominds from “least mind” to “most mind” identically for every mental capacity?

  It turns out that mental abilities are not all clumped together. Instead, people see minds in terms of two fundamentally different factors, sets of mental abilities we labeled experience and agency.13
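For readers curious about the mechanics, the clustering step can be sketched in a few lines of code. This is a toy simulation, not the survey's actual data: the number of ratings, the choice of six abilities, and the noise level are all our assumptions. It shows the pattern that factor analysis detects when two latent factors are at work: abilities within a factor correlate strongly with one another, while abilities across factors barely correlate at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulation (our assumption, not the survey's real data):
# 500 ratings driven by two independent latent factors.
n = 500
experience = rng.normal(size=n)  # latent "experience" score
agency = rng.normal(size=n)      # latent "agency" score

# Six rated abilities: the first three load on experience
# (say, pain, fear, joy), the last three on agency
# (say, planning, memory, thought), plus rating noise.
ratings = np.column_stack([experience] * 3 + [agency] * 3)
ratings += 0.3 * rng.normal(size=(n, 6))

corr = np.corrcoef(ratings.T)
within = corr[0, 1]   # two experience items: strongly correlated
across = corr[0, 3]   # an experience item vs. an agency item: near zero
```

If all the abilities measured a single "amount of mind," every pair would correlate strongly and one factor would suffice; it is the block structure of the correlation matrix that justifies extracting two factors instead of one.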

  The experience factor captures the ability to have an inner life, to have feelings and experiences. It includes the capacities for hunger, fear, pain, pleasure, rage, and desire, as well as personality, consciousness, pride, embarrassment, and joy. These facets of mind seemed to capture “what it is like” to have a mind—what psychologists and philosophers often talk about when they discuss the puzzle of consciousness. A mind with experience can feel what it is like to touch a hot stove, can enjoy going to the circus, and can have an orgasm.*

  The agency factor is composed of a different set of mental abilities: self-control, morality, memory, emotion recognition, planning, communication, and thought. The theme for these capacities is not sensing and feeling but rather thinking and doing. The agency factor is made up of the mental abilities that underlie our competence, intelligence, and action. Minds show their agency when they act and accomplish goals.

  A useful framework for understanding the difference between experience and agency is inside versus outside. Experience is what minds are like from the inside, what it feels like to be a person, or a cat, or a bat. Because experience is a matter of being inside a mind, it can be very elusive to others. In contrast, agency is more transparent because it is what minds are like from the outside. We can gauge an entity’s ability to plan and think simply by observing its actions and reactions. Said another way, experience is about inputs: sensations and feelings are generally conveyed by sense organs like eyes and ears that feed into minds. Conversely, agency is about outputs: movements and actions that feed out of minds.

  The two dimensions of experience and agency serve as the north-south and east-west axes of our “map” of cryptominds. Take a moment to look at it—this map is the guide to the rest of the book.

  Figure 2: The Map of Mind Perception

  Mind perception is measured on dimensions of agency and experience.

  Let’s take a quick tour of the map. First, it seems that normal, conscious, living adult humans like us (and executive Todd Billingsley and accountant Sharon Harvey) reside in the upper right corner, with both experience and agency. In contrast, the infant, the dog, and the frog (upper left) have some experience but little agency; they are entities for which people see that “someone is home,” but with diminished capacities to think and act. They are entities that can be harmed but that cannot harm us in return.

  Now continue the tour down the map. Below the animals and baby are the more cryptic cases of the human fetus and the PVS patient. These are cryptominds with very little agency but perhaps some experience; there might be someone home, but not anyone capable of answering the door. Even lower down is the dead woman (lower left), who is ascribed neither much experience nor agency—although it’s worth noting that she is not at zero. Perhaps we see dead people as having some mind because we remember their minds as they were during life. Or perhaps we see their minds because we believe in the afterlife, and how can there be heaven or hell without minds to feel joy and pain?

  As we continue to circle around the map, we move from the dead to the robot, which is perceived to have very little capacity for experience. Just imagine the classic science-fiction robot, which has no emotions and just keeps telling humans how irrational they are. A robot has the agency to help us safely explore the inside of a broken nuclear plant, but without experience it cannot help us explore the inside of a broken heart.

  And then, in the bottom right, we have the Almighty. As you might expect, God is perceived as very able to do things, but is, curiously, seen to have little experience. God’s mind may be great, but we don’t believe that He can feel hunger, fear, or even joy like the rest of us. Interestingly, in a replication of the mind survey, we discovered that corporations are seen to occupy the same location on the mind map. Like God, Google was seen as all agency and no experience.

  So far we have discovered that minds are a matter of perception and that people tend to see them along two dimensions. The bigger question you may be asking is, Why does it matter? As we hinted at earlier, mind perception forms the very basis for questions of life and death: entities with minds deserve moral consideration, whereas entities without minds do not. But if mind perception is split into agency and experience, how do these two factors relate to morality? To answer this question, we ask you to consider a thought experiment called “Baby vs. Robot.”

  It’s not about babies and robots fighting* but instead about two moral scenarios. In the first, imagine that the baby and the robot were just about to tumble off a cliff and you could save only one of them. Which would you save? Likely you would save the baby and let the robot fall to its doom. In the second scenario, imagine that the baby and the robot have found a loaded gun and are playing with it, when it goes off and injures someone. Which of them would you hold responsible? If you’re like most people, you would forgive the baby and condemn the robot to the junkyard.

  These two scenarios reveal that it is no fun being a robot, and also that someone needs to call child services on that baby’s parents. But most important, these scenarios demonstrate that there are two distinct kinds of moral status, not one. Questions of moral responsibility (Who deserves responsibility and punishment?) seem to be distinct from those of moral rights (Who deserves protection from harm?) because we protect the baby from harm and yet hold the robot morally responsible. This finding is striking because many have assumed that more mind equals more “morality,” with human adults having both rights and responsibilities and tables and turnips having neither.

  These two types of moral status (rights and responsibilities) not only are distinct but also map perfectly onto our two-dimensional mind map. To have moral rights you need to have experience, an inner life filled with feelings, and the potential for suffering. Conversely, to be morally responsible you need to have agency, to be able to plan, act, and appreciate the outputs of your thoughts. Babies have more experience than robots and so have more moral rights; robots have more agency than babies and so have more moral responsibility.

  Beyond babies and robots, the mind survey revealed that any entity with experience is seen to deserve moral rights, and any entity with agency is seen to deserve moral responsibility. The baby, the girl, the chimp, and the dog were seen as highest in experience, and survey takers also reported that these entities should be protected from harm. Conversely, adult humans, God, and Google were seen as highest in agency, and survey takers also reported that these entities should be held morally responsible for wrongdoing. Our hearts melt when children scream or puppies whimper—and our bile rises when we see adults (or others who should know better) causing them harm.

  The separation of agency/responsibility from experience/rights in our data was so striking that it seemed that we had discovered a deep fault line in mind perception. Like a border between two very different countries, this fault line—while normally invisible—separates two very different kinds of minds. On one side of the line are minds with agency and moral responsibility, and on the other side of the line are minds with experience and moral rights. Of course, just as there are dual citizens of two countries, some minds—like yours and mine—can live on both sides of this border, with both agency/responsibility and experience/rights. But as we’ll see throughout the book, the most interesting cryptominds are seen to belong to either one side of the line or the other. This separation between two kinds of minds has profound implications for ethical debates. In fact, it is so important that we will now say, in italics and without irony,

  There are two kinds of perceived minds, each with its own type of morality—thinking doers and vulnerable feelers.

  Thinking doers are active minds with moral responsibility that do actions, minds like corporations and God. Vulnerable feelers are passive minds with moral rights that have actions done to them, minds like puppies, medical patients, and babies. This division of doer and feeler should feel intuitive because it is as ancient as human thought. The writings of Confucius,14 the yin and yang of ancient China,15 and the Tao Te Ching16 have long split the world into complementary opposites such as black and white, hot and cold, and good and evil. In mind perception these opposites are inner (experience) and outer (agency), input (experience) and output (agency), passive (experience) and active (agency), recipient (experience) and doer (agency), victim (experience) and aggressor (agency).

  These complementary types of thinking doers and vulnerable feelers are echoed in a similar distinction first introduced by Aristotle in the realm of morality: he divided the moral world into moral agents and moral patients.17 Moral agents are those entities who do (im)moral deeds; they are the performers of good and evil, the heroes and villains, the Gandhis and Hitlers, the cops and robbers. Conversely, moral patients are those entities who receive (im)moral deeds. They are the beneficiaries of good and the victims of evil, the rescued and injured, the adopted and orphaned. Moral agents are thus the thinking doers of the moral world, possessing agency and moral responsibility, whereas moral patients are the vulnerable feelers of the moral world, possessing experience and moral rights (see figure 3).

  Figure 3: Moral Agents and Patients

  There are two kinds of perceived minds.

  Agents and patients crisply divide the moral world into two, but remember that they are complementary opposites: agents act upon patients, and patients are acted upon by agents. Like husbands and wives, agents and patients whisper to each other, “You complete me.” Good and evil usually involve both moral agents and moral patients—they are the heads and the tails of the same moral coin. You can understand “out” only by considering “in,” “doer” by considering “recipient,” and “aggressor” by considering “victim.” To have one, you need the other, which means that good and evil almost always have a dyad of two different minds,* a doing agent and a feeling patient.

  Important examples of evil deeds all fit this dyadic template, including murder (murderer + deceased), theft (thief + victim), and child abuse (abuser + child). Even good deeds like a rescue have both a rescuer and a person in need. In contrast to these dyadic deeds, consider whether it is evil to cut off your own ear. Pulling a Vincent van Gogh might be gross, but it isn’t wrong in the same way as cutting off someone else’s ear, because self-injury doesn’t have two different minds for agent and patient.

  Figure 4: The Winged Demon

  Immorality or evil (represented by the winged demon) emerges through the combination of both thinking agent (left) and vulnerable patient (right).

  Some debate whether there are truly victimless (or agentless) wrongs, but we can agree that acts with obvious agents and patients are prototypically immoral. This dyadic nature of morality is illustrated in figure 4—when the malicious intention of a thinking agent is combined with the suffering of a vulnerable patient, you get the demon of immorality. This dyadic template is consistent with both psychological data on moral judgment18 and long-standing legal theory, which suggests that guilty verdicts require both someone having been harmed (the guilty act of actus reus) and someone having knowingly perpetrated the harm (the guilty mind of mens rea).19

  Seen from the perspective of mind perception, good and evil aren’t mystical forces that exist apart from humanity, but simply what emerges through the interaction of agents and patients. To create evil, just intentionally cause another mind to suffer (e.g., kick a dog), and to create good, just intentionally prevent another mind from suffering (e.g., stop a dog from being kicked). More formally, we can define both moral and immoral acts thus:

  (Im)morality = Agency (of Agent) + Experience (of Patient)

  This definition reflects both the dyadic nature of good and evil and the kinds of mind (e.g., adult, child, animal, machine) in the agent and patient slots. To get maximum immorality, you should combine a very powerful agent and a very vulnerable patient. Conversely, for minimum immorality, you should combine a weak agent and an invulnerable patient. As evidence, consider a thought experiment that actually does involve fighting, “CEO vs. Little Girl!”

  Imagine that a CEO punches a little girl in the face. Chances are you’d think this is immoral. Now imagine that a little girl punches a CEO in the face. Chances are you’d think this is funny. Indeed, children injuring adults is a staple of hilarious YouTube videos, whereas adults injuring children is a staple of (decidedly unfunny) Lifetime movies.

  The evil of CEOs harming kids, but not of kids harming CEOs, is perfectly consistent with our formula; CEOs are mostly thinking doers, whereas kids are mostly vulnerable feelers, and so only one combination trips our evil detector. Linking mind perception to morality not only explains the enduring hilarity of kids injuring unsuspecting adults but also allows you to predict your moral outrage about almost any infraction. Tough man (high agency) punches kitten (high experience)? Immoral. Kitten (low agency) scratches tough man (low experience)? Not immoral. When you become enraged at an instance of moral depravity, chances are someone very agentic is harming someone (or something) very patientic. This idea of dyadic morality will be important throughout the book as we explore our different reactions to cryptominds causing and receiving harm, such as child-killing pigs and medical patients left to die.
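The asymmetry can be made concrete with a small sketch. The scores below are invented for illustration; the book's formula is qualitative, so the 0-to-1 scale and the specific numbers are our own assumptions.

```python
def perceived_immorality(agent_agency: float, patient_experience: float) -> float:
    """Dyadic formula: (im)morality = agency of the agent + experience of the patient.

    The 0-to-1 scoring scale is an illustrative assumption, not the authors' data.
    """
    return agent_agency + patient_experience

# Invented perceived-mind scores for the two characters.
ceo = {"agency": 0.9, "experience": 0.3}          # mostly a thinking doer
little_girl = {"agency": 0.2, "experience": 0.9}  # mostly a vulnerable feeler

ceo_hits_girl = perceived_immorality(ceo["agency"], little_girl["experience"])
girl_hits_ceo = perceived_immorality(little_girl["agency"], ceo["experience"])
print(ceo_hits_girl, girl_hits_ceo)  # the first combination far exceeds the second
```

Maximum outrage (powerful agent, vulnerable patient) and minimum outrage (weak agent, invulnerable patient) fall straight out of the two input slots: only one pairing trips the evil detector.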

  Well, it’s only the first chapter, and already we’ve accused your mother of being a zombie, explored the structure of mind perception, and revealed how mind perception defines good and evil. In the chapters to come, we will revisit many of these themes as we explore the neighborhoods of the mind map. We’ll discover the best way to escape blame (chapter 2), why natural disasters increase belief in God (chapter 9), why vegetative patients are seen as more dead than the dead (chapter 6), why conspiracy theories are irresistible (chapter 7), and how good and evil can make people physically more powerful (chapter 4).

  We’ll start in the next chapter with the mind of the Animal, the prototypical cryptomind. Nonhuman animals may not be able to talk (at least not well), but they do provide lots of physical cues to mind—movements and expressions and apparent goals and joys and sorrows. Animal minds are an important puzzle, one that shaped the history of psychology and that continues to bother everyone who ever ate something that could once look back at them.

  Our next stop will be the mind of the Machine. No one is tempted to see a mind in a pair of scissors, but we do glimpse mind when machines become complex and marvelous. Machine minds—like Jeopardy-winning computers—seem mostly to have agency and not experience, but new technology is leading to humanlike machines that also appear to feel emotions. We’ll see, however, that such experiencing minds are not universally loved and can be downright uncanny.

  The Patient investigates how we see someone who is sick or in treatment or in trouble. We typically see these minds through the lens of empathy, appreciating what it is like for them to feel pain, pleasure, or other feelings. Although empathy can make us attuned to experience, it can also blind us to agency. The analysis of patients also highlights the special world of medicine—those professionals who work with patients and who can sometimes lose sight of their minds.

  Some minds are cryptic because of our feelings about them. From the person who stole our parking space to the trusted friend who slept with our spouse, we have trouble seeing the mind of the Enemy. The hate and fear we feel toward enemies blind us to their agency and experience through the process of dehumanization—turning people into objects. In war, dehumanization may help a soldier do his job, but in everyday life it can make us callous and insensitive to the suffering of others.

  Some of the most enigmatic minds we encounter are the Silent—humans who can’t communicate. The silent include those in vegetative states and those who are locked in—with thoughts but no capacity to move—both of which bring up important ethical issues. Brain-scanning methods offer new hope for perceiving these cryptominds, but meanwhile families and caregivers try desperately to do mind reading of their own and sometimes mistake their own thoughts for those of the silent.

  Then there is the mind of the Group. A group, such as a corporation, doesn’t have a single brain, but this doesn’t stop us from talking about it as if it has a mind (e.g., “Google challenges Microsoft’s lead”). Groups typically seem to have more agency than experience, which means that people in groups can do evil but not suffer it. These perceptions suggest why groups are often implicated in paranoia about conspiracies.

  And do the Dead have minds? About four out of five Americans believe in life after death. Not nearly as many believe they can communicate with the dead, but believing in an afterlife means that many must also believe in an aftermind. Gathered at the graveside with others who knew the dearly departed, we may perceive the mind of the dead in almost the same way we perceive the mind of the living. The reasons for this perception are many, including our wish for immortality and our intuition of dualism, the apparent disconnect between mind and body.