The Laws of Medicine


  For Thomas, the astonishing feature of medicine in the 1940s was its ability to use this information to mount genuine therapeutic interventions against diseases based on rational precepts. Once heart failure had been reconceived in terms of pump dysfunction and volume overload (a failing pump cannot move the same volume of blood through the body, and the extra volume froths back into the lungs), then an effective, albeit crude, therapy for heart failure became self-evident: removing a few pints of blood from the veins to ease the straining heart. Similarly, once the miraculous recovery from streptococcal infection had been understood as the deployment of a host immunological response, then this, too, suggested a novel therapeutic approach: transferring serum from a convalescent human or animal to a newly infected patient to supply the crucial defensive factors (later found to be antistreptococcal antibodies) to boost the host’s immunological response. Here is Thomas describing the treatment for streptococcal pneumonia based on this principle: “The serum was injected, very slowly, by vein. When it worked, it worked within an hour or two. Down came the temperature, and the patient, who might have been moribund a few hours earlier, would be sleeping in good health.”

  Thomas wrote, “For an intern it was an opening of a new world. We had been raised to be ready for one kind of profession, and we sensed that the profession itself had changed at the moment of our entry. . . . We became convinced, overnight, that nothing lay beyond the reach of the future. Medicine was off and running.” It was the birth of what Thomas called the “youngest science.”

  ....

  By the time I read The Youngest Science, the scientific transformation of medicine had deepened even further. Take heart failure again. In 1937, Thomas wrote, the only reliable means to affect a failing heart, aside from propping up its function with extra oxygen, was to alter blood volume by inserting a needle into a vein and drawing out a hundred milliliters of fluid from the body. To a cardiologist working in the late 1990s, this would be akin to lancing an abscess using a skin cup: it might work, but it was a decidedly medieval approach. This cardiologist would now have at his disposal not one, or two, but no fewer than a dozen medicines to subtly modulate the volume, pressure, and rhythm of the failing heart, including diuretics, blood-pressure medications, drugs that open channels for salt and water in the kidneys, or medicines that maintain fine control on heart rhythms. Added to this were implantable defibrillators (colloquially called heart zappers) that delivered jolts of electricity to “reset” the heart should it enter a lethal rhythmic cycle. For the most intractable cases of heart failure—such as the young man whose heart muscles were destroyed, bit by bit, by the mysterious deposition of iron, like the Tin Man of Oz—even more innovative procedures exist, such as the transplantation of a whole foreign heart into the body, followed by a salvo of immunosuppressive medicines to ensure that the transplanted graft remains functional and intact in the body afterward.

  ....

  But the more I read The Youngest Science that year, the more I returned to a fundamental question: Is medicine a science? If, by science, we are referring to the spectacular technological innovations of the past decades, then without doubt medicine qualifies. But technological innovations do not define a science; they merely prove that medicine is scientific—i.e., therapeutic interventions are based on the rational precepts of pathophysiology.

  Sciences have laws—statements of truth based on repeated experimental observations that describe some universal or generalizable attributes of nature. Physics is replete with such laws. Some are powerful and general, such as the law of gravitation, which describes the force of attraction between two bodies with mass anywhere in the universe. Others apply to specific conditions, such as Ohm’s law, which only holds true for certain kinds of electrical circuits. In every case, however, a law distills a relationship between observable phenomena that remains true across multiple circumstances and multiple conditions. Laws are rules that nature must live by.

  There are fewer laws in chemistry. Biology is the most lawless of the three basic sciences: there are few rules to begin with, and even fewer rules that are universal. Living creatures must, of course, obey the fundamental rules of physics and chemistry, but life often exists on the margins and in the interstices of these laws, bending them to their near-breaking limit. Even the elephant cannot violate the laws of thermodynamics—although its trunk, surely, must rank as one of the most peculiar means to move matter using energy.

  But does the “youngest science” have laws? It seems like an odd preoccupation now, but I spent much of my medical residency seeking the laws of medicine. The criteria were simple: a “law” had to distill some universal guiding principle of medicine into a statement of truth. The law could not be borrowed from biology or chemistry; it had to be specific to the practice of medicine. In 1978, in a mordantly acerbic book called The House of God, the writer Samuel Shem had proposed “thirteen laws of medicine” (an example: “Law 12: if the radiology resident and the intern both see a lesion on an X-ray, then the lesion cannot be there”). But the laws that I was seeking were not attempts to skewer medical culture or highlight its perversities à la Shem; I was genuinely interested in rules, or principles, that applied to the practice of medicine at large.

  Of course, these would not be laws like those of physics or chemistry. If medicine is a science at all, it is a much softer science. There is gravity in medicine, although it cannot be captured by Newton’s equations. There is a half-life of grief, even if there is no instrument designed to measure it. The laws of medicine would not be described through equations, constants, or numbers. My search for the laws was not an attempt to codify or reduce the discipline into grand universals. Rather, I imagined them as guiding rules that a young doctor might teach himself as he navigates a profession that seems, at first glance, overwhelmingly unnavigable. The project began lightly—but it eventually produced some of the most serious thinking that I have ever done around the basic tenets of my discipline.

  LAW ONE

  * * *

  A strong intuition is much more powerful than a weak test.

  I discovered the first law of medicine by chance—which is exactly as it should be since it largely concerns chance. In the spring of 2001, toward the end of my internship year, I was asked to see a man with unexplained weight loss and fatigue. He was fifty-six years old, and a resident of Beacon Hill, the tony neighborhood with brick town houses and tree-lined, cobblestone streets that abuts Massachusetts General Hospital.

  Mr. Carlton—as I’ll call him—was the Hill distilled to its essence. With his starched blue shirt, a jacket with elbow patches, and a silk necktie fraying just so, he suggested money, but old money, the kind that can be stuffed under mattresses. There was something in his manner—a quicksilver volatility, an irritability—that I could not quite pin down. When he stood up, I noticed that the leather belt around his waist had been cinched tightly. More ominously, the muscles on the side of his forehead had begun to shrivel—a phenomenon called temporal wasting—which clearly suggested the weight loss had been recent and quite severe. He stood up to be weighed and told me that he had lost nearly twenty-six pounds over the last four months. Even the journey from the chair to the scale was like crossing an ocean. He had to sit down again afterward to catch his breath.

  The most obvious culprit was cancer—some occult, hidden malignancy that was driving this severe cachexia. He had no obvious risk factors: he was not a smoker and had no suggestive family history. I ran some preliminary labs on him, but they were largely normal, save for a mild drop in his white-cell count that could be attributed to virtually anything.

  Over the next four weeks, we scoured his body for signs of cancer. CAT scans were negative. A colonoscopy, looking for an occult colon cancer, revealed nothing except for an occasional polyp. He saw a rheumatologist—for the fleeting arthritic pains in his fingers—but again, nothing was diagnosed. I sent out another volley of lab tests. The technician in the blood lab complained that Mr. Carlton’s veins were so pinched that she could hardly draw any blood.

  For a while nothing happened. It felt like a diagnostic stalemate. More tests came back negative. Mr. Carlton was frustrated; his weight kept dropping, threatening to go all the way down to zero. Then, one evening, returning home from the hospital, I witnessed an event that changed my entire perspective on the case.

  Boston is a small town—and the geography of illness tracks the geography of its neighborhoods (I’ll risk admonishment here, but this is how medical interns think). To the northeast lie the Italian neighborhoods of the North End and the rough-and-tumble shipyards of Charlestown and Dorchester, with high densities of smokers and asbestos-exposed ship workers (think lung cancer, emphysema, asbestosis). To the south are desperately poor neighborhoods overrun by heroin and cocaine. Beacon Hill and Brookline, sitting somewhere in the middle, are bastions of the comfortable and the moneyed, with the spectrum of chronic illnesses that generally afflicts such classes.

  What happened that evening amounted to this: around six o’clock as I left the hospital after rounds, I saw Mr. Carlton in the lobby, by the Coffee Exchange, conversing with a man whom I had admitted months ago with a severe skin infection related to a heroin needle inserted incorrectly into a vein. The conversation could not have lasted for more than a few minutes. It may have involved something as innocuous as change for a twenty-dollar bill, or directions to the nearest ATM. But on my way home on the train, the image kept haunting me: the Beacon Hill scion chatting with the Mission Hill addict. There was a dissonant familiarity in their body language that I could not shake off—a violation of geography, of accent, of ancestry, of dress code, of class. By the time I reached my station, I knew the answer. Boston is a small town. It should have been obvious all along: Mr. Carlton was a heroin user. Perhaps the man at the Coffee Exchange was his sometime dealer, or an acquaintance of an acquaintance. In retrospect, I should also have listened to the blood-lab worker who had had such a hard time drawing Mr. Carlton’s blood: his veins were likely scarred from habitual use.

  The next week, I matter-of-factly offered Mr. Carlton an HIV test. I told him nothing of the meeting that I had witnessed. Nor did I ever confirm that he knew the man from Mission Hill. The test was strikingly positive. By the time the requisite viral-load and CD4 counts had been completed, we had clinched the diagnosis: Mr. Carlton had AIDS.

  ....

  I’m describing this case in such detail because it contains a crucial insight. Every diagnostic challenge in medicine can be imagined as a probability game. This is how you play the game: you assign a probability that a patient’s symptoms can be explained by some pathological dysfunction—heart failure, say, or rheumatoid arthritis—and then you summon evidence to increase or decrease the probability. Every scrap of evidence—a patient’s medical history, a doctor’s instincts, findings from a physical examination, past experiences, rumors, hunches, behaviors, gossip—raises or lowers the probability. Once the probability tips over a certain point, you order a confirmatory test—and then you read the test in the context of the prior probability. My encounter with Mr. Carlton in the lobby of the hospital can now be reconceived as such a probability game. From my perceptual biases, I had assigned Mr. Carlton an infinitesimally low chance of HIV infection. By the end of that fateful evening, though, my corner-of-the-eye encounter had shifted that probability dramatically. The shift was enough to tip the scales, trigger the test, and reveal the ultimate diagnosis.
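  The probability game described above is, at bottom, Bayes’s rule: a prior belief, revised piece by piece as evidence arrives. A minimal sketch in Python—every number here is a hypothetical illustration, not data from Mr. Carlton’s case:

```python
def update_odds(prior_prob, likelihood_ratios):
    """Fold each piece of evidence into a prior using the odds form of Bayes's rule.

    Each likelihood ratio says how many times more likely the evidence is
    if the disease is present than if it is absent.
    """
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical figures: a near-zero prior, then three scraps of evidence
# (severe weight loss, pinched veins, the chance encounter), each expressed
# as a likelihood ratio. Together they tip a 0.01 percent prior past 50 percent.
print(update_odds(0.0001, [20, 10, 50]))  # roughly 0.5
```

The order of the evidence does not matter; each scrap simply multiplies the running odds, which is why even a trivial-seeming observation can be the one that tips the scales.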

  But this, you might object, is a strange way to diagnose an illness. What sense does it make to assess the probability of a positive test before a test? Why not go to the test directly? A more thoughtful internist, you might argue, would have screened a patient for HIV right away and converged swiftly on the diagnosis without fumbling along, as I had, for months.

  It is here that an insight enters our discussion—and it might sound peculiar at first: a test can only be interpreted sanely in the context of prior probabilities. It seems like a rule taken from a Groucho Marx handbook: you need to have a glimpse of an answer before you have the glimpse of the answer (nor, for that matter, should you seek to become a member of a club that will accept you as a member).

  To understand the logic behind this paradox, we need to understand that every test in medicine—any test in any field, for that matter—has a false-positive and false-negative rate. In a false positive, a test is positive even when the patient does not have the disease or abnormality (the HIV test reads positive, but you don’t have the virus). In a false negative, a patient tests negative, but actually has the abnormality being screened for (you are infected, but the test is negative).

  The point is this: if patients are screened without any prior knowledge about their risks, then the false-positive or false-negative rates can confound any attempt at diagnosis. Consider the following scenario. Suppose the HIV test has a false-positive rate of 1 in 1,000—i.e., one out of every thousand patients tests positive, even though the patient carries no infection (the actual false-positive rate has decreased since my time as an intern, but remains in this range). And suppose, further, we deploy this test in a population of patients where the prevalence of HIV infection is also 1 in 1,000. To a close approximation, for every infected patient who tests positive, there will also be one uninfected person who will also test positive. For every test that comes back positive, in short, there is only a 50 percent chance that the patient is actually positive. Such a test, we’d all agree, is not particularly useful: it only works half the time. The “more thoughtful internist” in our original scenario gains very little by ordering an HIV test on a man with no risk factors: if the test comes back positive, it is more likely that the test is false than that the infection is real. If the false-positive rate rises to 1 percent and the prevalence falls to 0.05 percent—both realistic numbers—then the chance of a positive test’s being real falls to an abysmal 5 percent. The test is now wrong 95 percent of the time.

  In contrast, watch what happens if the same population is preselected, based on risk behaviors or exposures. Suppose our preselection strategy is so accurate that we can stratify patients as “high risk” before the test. Now, the up-front prevalence of infection climbs to 19 in 100, and the situation changes dramatically. For every twenty positive tests, only one is a false positive, and nineteen are true positives—an accuracy rate of 95 percent. It seems like a trick pulled out of a magician’s hat: by merely changing the structure of the tested population, the same test is transformed from perfectly useless to perfectly useful. You need a strong piece of “prior knowledge”—I’ve loosely called it an intuition—to overcome the weakness of a test.
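  The arithmetic in the two paragraphs above can be checked directly. A sketch, assuming for simplicity a test that never misses a true infection (sensitivity of 100 percent, which is not stated in the text and slightly flatters the test):

```python
def positive_predictive_value(prevalence, false_positive_rate):
    """Fraction of positive tests that are true positives, assuming sensitivity = 1.0."""
    true_positives = prevalence
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Unscreened population: prevalence 1 in 1,000, false positives 1 in 1,000.
print(positive_predictive_value(0.001, 0.001))   # roughly 0.50: a coin flip
# Prevalence falls to 0.05 percent, false positives rise to 1 percent.
print(positive_predictive_value(0.0005, 0.01))   # roughly 0.05: wrong ~95% of the time
# Preselected high-risk group: prevalence of 19 in 100, same test.
print(positive_predictive_value(0.19, 0.01))     # roughly 0.95: now useful
```

Nothing about the test itself changes between the three calls; only the prevalence in the tested population moves, which is the whole trick pulled out of the magician’s hat.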

  The “prior knowledge” that I am describing is the kind of thing that old-school doctors do very well, and that new technologies in medicine often neglect. “Prior knowledge” is what is at stake when your doctor—rather than ordering yet another echocardiogram or a stress test—asks you whether your feet have been swelling or takes your pulse for no apparent reason. I once saw a masterful oncologist examining a patient with lung cancer. The exam proceeded quite predictably. He listened to her heart and lungs. He checked her skin for rashes. He made her walk across the room. And then, as the exam came to a close, he began to ask her a volley of bizarre questions. He fussed about his office, writing his notes, then blurted out a wrong date. She corrected him, laughing. When was the last time she had gone out with her friends? he asked. Had her handwriting changed? Was she wearing an extra pair of socks with her open-toed shoes?

  Once he had finished and she had left the office, I asked him about the questions. The answer was surprisingly simple: he was screening her for depression, anxiety, sleeplessness, sexual dysfunction, neuropathy, and a host of other sequelae of her illness or its treatment. He had refined the process over so many iterations that his questions, seemingly oblique, had been sharpened into needlelike probes. A woman doesn’t know what to say if you ask her if she has “neuropathy,” he told me, but no one can forget putting on an extra pair of socks. It’s easier to summon a date that you are specifically asked for. Picking out a blurted-out date that’s wrong requires a more subtle combination of attention, memory, and cognition. None of his questions was anywhere near diagnostic or definitive; if there were positive or negative signs, he would certainly need to order confirmatory tests. But he was doing the thing that the most incisive doctors do: he was weighing evidence and making inferences. He was playing with probability.

  This line of reasoning, it’s worthwhile noting, is not a unique feature of any particular test. It applies not only to medicine but to any other discipline that is predicated on predictions: economics or banking, gambling or astrology. The core logic holds true whether you are trying to forecast tomorrow’s weather or seeking to predict rises and falls in the stock market. It is a universal feature of all tests.

  ....

  The man responsible for this strange and illuminating idea was neither a doctor nor a scientist by trade. Born in Hertfordshire in 1702, Thomas Bayes was a clergyman and philosopher who served as the minister at the chapel in Tunbridge Wells, near London. He published only two significant papers in his lifetime—the first, a defense of God, and the second, a defense of Newton’s theory of calculus (it was a sign of the times that in 1732, a clergyman found no cognitive dissonance between these two efforts). His best-known work—on probability theory—was not published during his lifetime and was only rediscovered decades after his death.