  To generalize a little more: to determine how much credibility we can attribute to things we’ve personally observed, compared to those observed by others, we can simply assign greater credibility to the source that has historically been more accurate in predicting future events. If Caroline sees that her own inductive conclusions are usually less accurate than those she learns from others, she can decide to lend more weight to their predictions than her own.
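  To make that weighting idea concrete, here is a minimal sketch in Python. It is not taken from the book; the track records and the simple proportional weighting rule are invented purely for illustration.

```python
# A minimal sketch (not from the book) of weighting two sources of belief
# by their historical accuracy. All track-record numbers here are hypothetical.

def accuracy(predictions, outcomes):
    """Fraction of past predictions that turned out to be correct."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

# Caroline's own past predictions versus what actually happened
own_accuracy = accuracy([True, True, False, True], [True, False, False, False])   # 0.50
# The track record of the sources she learns from
others_accuracy = accuracy([True, False, True, True], [True, False, True, True])  # 1.00

# Lend more weight to whichever source has been more reliable so far
weight_own = own_accuracy / (own_accuracy + others_accuracy)
weight_others = 1 - weight_own
print(f"own: {weight_own:.2f}, others: {weight_others:.2f}")  # own: 0.33, others: 0.67
```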

  This leads to a surprising conclusion. Although one of the assumptions (the first non-commandment) is that there is an external reality or truth, how much each individual believes in a particular aspect of that reality isn’t truly objective—it’s based on the experiences that he or she has had in life. While the moon’s existence is certainly regarded as a fact, it would seem appropriate for a blind person to have less confidence in the moon’s existence than in the existence of sounds.

  Let’s take a less extreme example. You might be more likely to believe that smoking can be bad for a person’s health if you have personally witnessed the decline in the health of your chain-smoking grandfather. It also follows that your beliefs can and should change over time as you experience different things or come across compelling data that contradict your existing beliefs.

  The Use of Science

  The concept that our experiences and perspectives are central to how we form beliefs about the world is called perspectival realism. It says that an observer can never be completely objective since this would require a view external to any one perspective—or what philosopher Thomas Nagel famously called “the view from nowhere.”6 Given the fallibility of our senses, the best view we can manage is the view from multiple sources and perspectives. Then we can compare those different models of reality and combine the reliable pieces of each perspective to form a more holistic and accurate view of reality. You may recognize this as the very method we discussed in chapter 2, quoting philosopher Bertrand Russell. He decided that even if reports vary regarding the color and size of a table, the fact that everyone sees a table gives him enough confidence in its existence to set a casserole on it.

  We’ll never see the exact truth about any aspect of the world, because we are limited by our perspectives. However, our observations and inductive reasoning can still lead us to good approximations of the truth. In order to describe the way we form our conceptions of reality, philosopher Ronald Giere suggested that human beings intrinsically create a mental model or “map” of reality.7 A good map doesn’t claim a one-to-one relationship between itself and reality. A map always has less detail than is available in the real world. But the stronger the correspondence between the map and reality, the better we consider the map. Trying on several perspectives improves the detail of our mental maps of reality.

  How relevant is the scientific use of laws and equations to describing the fact that the sun will rise tomorrow? Inductive reasoning is the foundation of the scientific method. So are the careful use of observational data (our senses) and the use of language and mathematics (definitional truths). The scientific method is a way to gain knowledge by making observations and doing experiments to validate that knowledge. It can be summarized as a process with five steps:8

  Make observations about the world.

  Construct a hypothesis that could explain the observations.

  Test the hypothesis through observable, empirical, and measurable experiments.

  Analyze the data, and draw a conclusion on the validity or invalidity of the hypothesis.

  Communicate the data and results to others for criticism or concurrence.

  This method is both simple and powerful. Suppose you grow tomatoes in your backyard, and you’ve noticed that some vines grow bigger, better tomatoes than others. There’s the observation. You guess that the plants getting more water are doing better. There’s your hypothesis. You test this by carefully watering each vine in the row with a different, premeasured amount of water every day for a full season. At the end of the season, you record the number and sizes of the tomatoes and learn, to your surprise, that the tomatoes at the end of the row, the ones that received the least water each day, actually did the best.

  But before you start watering all of your tomatoes with an eyedropper, you take that crucial fifth step—sharing your data with others who have a more objective view. That’s when someone notices a flaw in your study design: the tomatoes you watered least were also at the downhill end of the row. They weren’t getting much water from you. But, thanks to gravity, they were getting a daily sip from their uphill neighbors.

  So it’s back to the drawing board to test with level rows. That’s how science works.

  The scientific method is a great ally to the current framework of belief because it systematizes the process for inductive reasoning. It allows us to be confident in millions of discoveries and observations made with the use of that method. The discovery of bacteria and the dawn of modern health care, the discovery of the role of DNA in reproduction and genetic traits, the understanding of electricity, and the advent of the microchip and personal computer—these are just a few examples of human achievement made possible by this careful, systematic, peer-reviewed means of interrogating the world.

  But for all its power and achievement, the question remains: is the scientific method the right tool for predicting whether the sun will rise tomorrow?

  The answer is a definite yes. The scientific description of the orbit of the earth around the sun is just a much better attempt at describing the situation than our own inductive conclusions. And, unlike our intuition that the sun goes around the earth, the scientific description has the advantage of being demonstrably correct. Science tells us how and when the sun will rise tomorrow and enables us to predict where in the sky it will be at any particular time.

  Even so, science is not very good at telling us why the sun will rise. Gravity has been well described, but the exact reason that objects with mass attract each other is still a bit of a puzzle. There are theories, but without corroborating observations and consensus among scientists, it’s reasonable to place much less confidence in the explanations provided by these theories than in the simple description of the motion of celestial bodies that has been tested and validated now for four hundred years.

  It’s also worth mentioning that conventional science sometimes turns out to be not strictly wrong but incomplete. That was the case when Newtonian gravity was amended by Einstein’s general relativity. That’s why the strength of any belief should be proportional to the evidence and to the knowledge and intellectual abilities of the people articulating that belief. Beliefs can and should be reevaluated as new evidence or information becomes available. Newton put forth the simplest explanation that was consistent with the data he had observed at the time. New data became available later, so a more complete description was necessary. Beliefs that change or become discredited do not invalidate the scientific method or inductive reasoning. They reflect the fact that we don’t always have perfect information, so a probabilistic method is the most appropriate way to capture the uncertainty inherent in any belief.

  While science is generally a reliable way to know about the world, the people practicing science—scientists—are also fallible human beings. They make errors and are plagued by the same desires for money, power, and influence as others. These influences sometimes taint the quality of their research. Consider, for example, the researcher from the University of Vermont College of Medicine who falsified data about therapies for menopause in order to receive several federal research grants totaling $2.9 million.9 Having confidence in the scientific method doesn’t mean that scientific research and data are beyond reproach. In fact, one of the great strengths of the method is that fifth step, the requirement to communicate findings and invite criticism from those outside a given scientist’s biases. Scientific careers are made by invalidating findings just as they are made by validating them. The collective desire for accuracy serves as a powerful check on the individual desire to be right. That’s how the researcher’s errant data were uncovered and corrected. As a result, he was prosecuted and sentenced to one year in federal prison. By cross-validating results, challenging conclusions, and debating competing theories, the best science emerges over time.

  Collating Data

  Where does understanding or intelligence fit into the picture? It is possible to think of examples where inductive reasoning would seem to lead to false conclusions. For example, let’s go back to the sun. Instead of Caroline the Californian, let’s turn to Albert the Alaskan. He’s seen a lot of sunrises too, of course. But in Barrow, Alaska, Albert’s hometown, the tilt of the earth means that in winter the sun doesn’t rise every day. Does inductive reasoning mean that because Albert has seen the sun rise for so many days, he should always expect it to rise on another given day? Of course not. In this case, his understanding of physics and his awareness that the sun doesn’t rise at all in northern Alaska in midwinter will trump the simple inductive conclusion that the sun rises every day.

  But that example doesn’t mean we have to throw inductive reasoning away in favor of some completely new way of thinking. Albert’s improved understanding is just the result of using other observations and inductive reasoning, combining and weighting them to form beliefs that become more complex than simple observational data. It’s just inductive reasoning played forward.

  Growing up in Barrow, little boy Albert might reasonably have believed at first that the sun would rise every day. But as he grew, he would have started putting the pieces together. He might have noticed that the days got dramatically shorter in the months before the sun disappeared entirely. These observations would lead him to form other inductive beliefs that captured the patterns in when the sun would rise and how long it would shine each day. As he went through high school, he would have learned how physics and astronomy predict the motion of the earth around the sun and the times and durations of sun exposure around the world. His belief in the validity of the concepts taught to him at school would have been solidified by other inductive observations about the soundness of the information his science teachers taught him. In the end, an “understanding” of why the sun rises or doesn’t rise could override Albert’s simplistic direct observations. This wouldn’t contradict his inductive reasoning ability—it would be the product of multiple steps of inductive reasoning.

  The Fourth Non-commandment

  From our three core assumptions, we were able to gain confidence in inductive reasoning. With induction, we can generate many other beliefs based on observations and experiences in life. The scientific method is an effective tool for discerning facts and, just as importantly, for learning what previous knowledge is no longer accurate. A critical part of that method is the control of our biases in the service of an accurate view of reality. Certain beliefs can be held more strongly than others. The basis for deciding how strongly to hold a belief should be the evidence encountered along the way, as well as the odds that the evidence is correct. Conflicting beliefs can be resolved by deciding which belief has the greater likelihood of being true, based on the evidence.

  In summary, we’ve formulated a belief that inductive reasoning and the scientific method are tools for establishing the truth. The key principle in both is the notion that the strength of a belief is tied to the evidence in its support.

  Let’s see where we are when we add this conclusion to our Ten Non-commandments:

  I.

  The world is real, and our desire to understand the world is the basis for belief.

  II.

  We can perceive the world only through our human senses.

  III.

  We use rational thought and language as tools for understanding the world.

  IV.

  All truth is proportional to the evidence.

  4

  Beliefs about the Unknown

  Make everything as simple as possible, but not simpler.

  —Albert Einstein

  While inductive reasoning takes advantage of observations and evidence, it’s sometimes necessary to form beliefs when observation and evidence are not available. “Is there life after death?” and “Do we have a soul?” are just two examples. How can we approach questions like these?

  When experience is limited, a process such as favoring simplicity (Ockham’s razor) becomes especially powerful. It captures the general belief that the explanation of the facts requiring the fewest assumptions is probably the right one. Put another way, if you have two possible explanations, and they both explain the data, the simpler one is usually right.

  So let’s put Ockham’s razor to work on beliefs that cannot be underpinned by observation and evidence.

  That middle phrase above—“and they both explain the data”—is key. Favoring simplicity doesn’t just mean “simpler is better.” To be in the running at all, an explanation has to fit all of the facts. Simpler is better only if the simpler explanation also completely works. Favoring simplicity suggests that we should tend toward simpler explanations until we need to trade some of that simplicity for greater explanatory power. Isaac Newton captured this well when he said, “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.”1

  Since this principle of favoring simplicity is not itself a core assumption, just a guide in formulating the assumptions, we still have one major obstacle to overcome. This principle needs to be logically deduced so that the overall framework remains coherent. The good news is that favoring simplicity—also known as the “principle of parsimony”—can be validated not only through personal experiences, but also by comparing the probabilities of alternative theories.

  The simplest kind of theory includes just one principle, while more complex theories rely on more than one. Since each principle attached to a theory is a form of belief, each carries a probability of being accurate. The more probabilities you multiply together, the smaller the product becomes. In other words, all other things being equal between two theories, the more principles a theory relies on, the lower the probability that it will be true.

  Suppose we’re all sitting, bored, on a small-town street corner one afternoon and decide to pass the time by betting on the next car to go by. If one of us said, “I’ll bet you five bucks that the next car that goes by will be white,” you might not take that bet. You know that white is a very popular car color, so you could very well end up losing five bucks.

  If one of us said, “I’ll bet the next car that goes by will be a Ford,” you’re probably not going to take that bet either. Half the cars in small-town America are Fords. But if one of us said, “I’ll bet the next car that goes by will be a white 1969 Ford Mustang convertible,” you’ll most likely jump on that bet in a heartbeat. Each one of those five principles (white, 1969, Ford, Mustang, convertible) lowers the probability of the next car being a match. Put them all together, and you’re about to be five dollars richer.

  To see why, let’s take a look at how the probabilities stack up. Let’s say we assume (generously) that each one of these “principles” is equally likely, and each has a 50 percent likelihood of being true. So each of the following events is 50 percent probable: the next car is white; it’s from the 1969 model year; it’s a Ford; it’s a Mustang; and it’s a convertible.

  These are insanely generous odds, of course—for example, the assumption that half the cars on the road are from 1969. To calculate the overall likelihood of winning the bet, we multiply the individual probabilities of each event together. We multiply the chance that the car is white (50 percent) by the chances it’s a 1969 (50 percent) and a Ford (50 percent) and a Mustang (50 percent) and a convertible (50 percent) . . . which yields a combined probability of just 3.125 percent (50 percent × 50 percent × 50 percent × 50 percent × 50 percent). So even in this dream world saturated with classic Mustangs, the chance of winning this bet is still incredibly small.
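  For readers who like to see the arithmetic written out, here is a short Python sketch of the same calculation. It simply multiplies the five generous 50 percent figures assumed above; nothing in it comes from real traffic data.

```python
# The car-bet arithmetic from above: five independent "principles,"
# each generously assumed to have a 50 percent chance of being true.
principles = {"white": 0.5, "1969": 0.5, "Ford": 0.5, "Mustang": 0.5, "convertible": 0.5}

combined = 1.0
for probability in principles.values():
    combined *= probability

print(f"{combined:.5f}")  # 0.03125, i.e. about 3.1 percent
```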

  The above example shows that increasing the number of variables in an explanation reduces the overall likelihood of the explanation being correct, even if the variables are all equally likely.

  Variables aren’t always equally likely, of course, and so the probabilities should be factored in. If you have an explanation with three very probable principles, it’s more likely to be true than one with two improbable principles—and we should always favor the explanation that’s most probable. But all other things being equal, increasing complexity should reduce our confidence. Simpler is better, so long as it fully describes the facts.
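  To put rough numbers on that comparison (the figures are invented for the example), three principles that are each 90 percent likely combine to about 73 percent, while two principles that are each only 30 percent likely combine to just 9 percent. A brief sketch:

```python
# A sketch of the point about unequal probabilities: an explanation built from
# three highly probable principles can beat one built from two improbable ones.
# The 0.9 and 0.3 figures are illustrative assumptions, not measured data.
from math import prod

three_probable = prod([0.9, 0.9, 0.9])  # 0.729
two_improbable = prod([0.3, 0.3])       # 0.09

print(three_probable > two_improbable)  # True: here the more complex explanation wins
```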

  All this suggests that favoring simplicity isn’t a law or a foolproof principle. Rather, it’s a tool for deciding between competing ideas when there’s not much helpful evidence. As you’ll see, the questions we’ll be analyzing in this chapter are of exactly that kind.

  In the White Room

  Let’s start with a thought experiment. A woman in a white lab coat is about to ask you a series of questions about what exists or doesn’t exist, and you are required to answer them. For every right answer, she will give you a hundred dollars. You pride yourself on being a rational thinker who analyzes the reasons for your choices, so you’re already thinking about the new iPad you plan to buy with the money.