  I read two of these studies on a winter morning on the subway in New York, a few months after returning from Calcutta. Across the aisle, a man in a gray fur hat was pinning down his son to put a gray fur hat on him. At Fifty-Ninth Street, a mother wheeled in a stroller with twins emitting, it seemed to my ears, identically pitched screams.

The studies provided a strange interior solace—answering some of the questions that had so haunted my father and grandmother. But they also provoked a volley of new questions: If Moni’s illness was genetic, then why had his father and sister been spared? What “triggers” had unveiled these predispositions? How much of Jagu’s or Moni’s illnesses arose from “nature” (i.e., genes that predisposed to mental illness) versus “nurture” (environmental triggers such as upheaval, discord, and trauma)? Might my father carry the susceptibility? Was I a carrier as well? What if I could know the precise nature of this genetic flaw? Would I test myself, or my two daughters? Would I inform them of the results? What if only one of them turned out to carry that mark?

  While my family’s history of mental illness was cutting through my consciousness like a red line, my scientific work as a cancer biologist was also converging on the normalcy and abnormalcy of genes. Cancer, perhaps, is an ultimate perversion of genetics—a genome that becomes pathologically obsessed with replicating itself. The genome-as-self-replicating-machine co-opts the physiology of a cell, resulting in a shape-shifting illness that, despite significant advances, still defies our ability to treat or cure it.

  But to study cancer, I realized, is to also study its obverse. What is the code of normalcy before it becomes corrupted by cancer’s coda? What does the normal genome do? How does it maintain the constancy that makes us discernibly similar, and the variation that makes us discernibly different? How, for that matter, is constancy versus variation, or normalcy versus abnormalcy, defined or written into the genome?

  And what if we learned to change our genetic code intentionally? If such technologies were available, who would control them, and who would ensure their safety? Who would be the masters, and who the victims, of this technology? How would the acquisition and control of this knowledge—and its inevitable invasion of our private and public lives—alter the way we imagine our societies, our children, and ourselves?

  This book is the story of the birth, growth, and future of one of the most powerful and dangerous ideas in the history of science: the “gene,” the fundamental unit of heredity, and the basic unit of all biological information.

  I use that last adjective—dangerous—with full cognizance. Three profoundly destabilizing scientific ideas ricochet through the twentieth century, trisecting it into three unequal parts: the atom, the byte, the gene. Each is foreshadowed by an earlier century, but dazzles into full prominence in the twentieth. Each begins its life as a rather abstract scientific concept, but grows to invade multiple human discourses—thereby transforming culture, society, politics, and language. But the most crucial parallel between the three ideas, by far, is conceptual: each represents the irreducible unit—the building block, the basic organizational unit—of a larger whole: the atom, of matter; the byte (or “bit”), of digitized information; the gene, of heredity and biological information.I

  Why does this property—being the least divisible unit of a larger form—imbue these particular ideas with such potency and force? The simple answer is that matter, information, and biology are inherently hierarchically organized: understanding that smallest part is crucial to understanding the whole. When the poet Wallace Stevens writes, “In the sum of the parts, there are only the parts,” he is referring to the deep structural mystery that runs through language: you can only decipher the meaning of a sentence by deciphering every individual word—yet a sentence carries more meaning than any of the individual words. And so it is with genes. An organism is much more than its genes, of course, but to understand an organism, you must first understand its genes. When the Dutch biologist Hugo de Vries encountered the concept of the gene in the 1890s, he quickly intuited that the idea would reorganize our understanding of the natural world. “The whole organic world is the result of innumerable different combinations and permutations of relatively few factors. . . . Just as physics and chemistry go back to molecules and atoms, the biological sciences have to penetrate these units [genes] in order to explain . . . the phenomena of the living world.”

  The atom, the byte, and the gene provide fundamentally new scientific and technological understandings of their respective systems. You cannot explain the behavior of matter—why gold gleams; why hydrogen combusts with oxygen—without invoking the atomic nature of matter. Nor can you understand the complexities of computing—the nature of algorithms, or the storage or corruption of data—without comprehending the structural anatomy of digitized information. “Alchemy could not become chemistry until its fundamental units were discovered,” a nineteenth-century scientist wrote. By the same token, as I argue in this book, it is impossible to understand organismal and cellular biology or evolution—or human pathology, behavior, temperament, illness, race, and identity or fate—without first reckoning with the concept of the gene.

  There is a second issue at stake here. Understanding atomic science was a necessary precursor to manipulating matter (and, via the manipulation of matter, to the invention of the atomic bomb). Our understanding of genes has allowed us to manipulate organisms with unparalleled dexterity and power. The actual nature of the genetic code, it turns out, is astoundingly simple: there’s just one molecule that carries our hereditary information and just one code. “That the fundamental aspects of heredity should have turned out to be so extraordinarily simple supports us in the hope that nature may, after all, be entirely approachable,” Thomas Morgan, the influential geneticist, wrote. “Her much-advertised inscrutability has once more been found to be an illusion.”

Our understanding of genes has reached such a level of sophistication and depth that we are no longer studying and altering genes in test tubes, but in their native context in human cells. Genes reside on chromosomes—long, filamentous structures buried within cells that contain tens of thousands of genes linked together in chains.II Humans have forty-six such chromosomes in total—twenty-three from one parent and twenty-three from another. The entire set of genetic instructions carried by an organism is termed a genome (think of the genome as the encyclopedia of all genes, with footnotes, annotations, instructions, and references). The human genome contains between twenty-one and twenty-three thousand genes that provide the master instructions to build, repair, and maintain humans. Over the last two decades, genetic technologies have advanced so rapidly that we can decipher how several of these genes operate in time and space to enable these complex functions. And we can, on occasion, deliberately alter some of these genes to change their functions, thereby resulting in altered human states, altered physiologies, and changed beings.

  This transition—from explanation to manipulation—is precisely what makes the field of genetics resonate far beyond the realms of science. It is one thing to try to understand how genes influence human identity or sexuality or temperament. It is quite another thing to imagine altering identity or sexuality or behavior by altering genes. The former thought might preoccupy professors in departments of psychology, and their colleagues in the neighboring departments of neuroscience. The latter thought, inflected with both promise and peril, should concern us all.

As I write this, organisms endowed with genomes are learning to change the heritable features of organisms endowed with genomes. I mean the following: in just the last four years—between 2012 and 2016—we have invented technologies that allow us to change human genomes intentionally and permanently (although the safety and fidelity of these “genomic engineering” technologies still need to be carefully evaluated). At the same time, the capacity to predict the future fate of an individual from his or her genome has advanced dramatically (although the true predictive capacities of these technologies still remain unknown). We can now “read” human genomes, and we can “write” human genomes in a manner inconceivable just three or four years ago.

It hardly requires an advanced degree in molecular biology, philosophy, or history to note that the convergence of these two events is like a headlong sprint into an abyss. Once we can understand the nature of fate encoded by individual genomes (even if we can predict this in likelihoods rather than in certainties), and once we acquire the technology to intentionally change these likelihoods (even if these technologies are inefficient and cumbersome), our future is fundamentally changed. George Orwell once wrote that whenever a critic uses the word human, he usually renders it meaningless. I doubt that I am overstating the case here: our capacity to understand and manipulate human genomes alters our conception of what it means to be “human.”

  The atom provides an organizing principle for modern physics—and it tantalizes us with the prospect of controlling matter and energy. The gene provides an organizing principle for modern biology—and it tantalizes us with the prospect of controlling our bodies and fates. Embedded in the history of the gene is “the quest for eternal youth, the Faustian myth of abrupt reversal of fortune, and our own century’s flirtation with the perfectibility of man.” Embedded, equally, is the desire to decipher our manual of instructions. That is what is at the center of this story.

  This book is organized both chronologically and thematically. The overall arc is historical. We begin in Mendel’s pea-flower garden, in an obscure Moravian monastery in 1864, where the “gene” is discovered and then quickly forgotten (the word gene only appears decades later). The story intersects with Darwin’s theory of evolution. The gene entrances English and American reformers, who hope to manipulate human genetics to accelerate human evolution and emancipation. That idea escalates to its macabre zenith in Nazi Germany in the 1940s, where human eugenics is used to justify grotesque experiments, culminating in confinement, sterilization, euthanasia, and mass murder.

  A chain of post–World War II discoveries launches a revolution in biology. DNA is identified as the source of genetic information. The “action” of a gene is described in mechanistic terms: genes encode chemical messages to build proteins that ultimately enable form and function. James Watson, Francis Crick, Maurice Wilkins, and Rosalind Franklin solve the three-dimensional structure of DNA, producing the iconic image of the double helix. The three-letter genetic code is deciphered.

Two technologies transform genetics in the 1970s: gene sequencing and gene cloning—the “reading” and “writing” of genes (the phrase gene cloning encompasses the gamut of techniques used to extract genes from organisms, manipulate them in test tubes, create gene hybrids, and produce millions of copies of such hybrids in living cells). In the 1980s, human geneticists begin to use these techniques to map and identify genes linked to diseases, such as Huntington’s disease and cystic fibrosis. The identification of these disease-linked genes augurs a new era of genetic management, enabling parents to screen fetuses, and potentially abort them if they carry deleterious mutations (any person who has tested her unborn child for Down syndrome, cystic fibrosis, or Tay-Sachs disease, or has been tested herself for, say, BRCA1 or BRCA2, has already entered this era of genetic diagnosis, management, and optimization. This is not a story of our distant future; it is already embedded in our present).

  Multiple genetic mutations are identified in human cancers, leading to a deeper genetic understanding of that disease. These efforts reach their crescendo in the Human Genome Project, an international project to map and sequence the entire human genome. The draft sequence of the human genome is published in 2001. The genome project, in turn, inspires attempts to understand human variation and “normal” behavior in terms of genes.

  The gene, meanwhile, invades discourses concerning race, racial discrimination, and “racial intelligence,” and provides startling answers to some of the most potent questions coursing through our political and cultural realms. It reorganizes our understanding of sexuality, identity, and choice, thus piercing the center of some of the most urgent questions coursing through our personal realms.III

  There are stories within each of these stories, but this book is also a very personal story—an intimate history. The weight of heredity is not an abstraction for me. Rajesh and Jagu are dead. Moni is confined to a mental institution in Calcutta. But their lives and deaths have had a greater impact on my thinking as a scientist, scholar, historian, physician, son, and father than I could possibly have envisioned. Scarcely a day passes in my adult life when I do not think about inheritance and family.

  Most important, I owe a debt to my grandmother. She did not—she could not—outlive the grief of her inheritance, but she embraced and defended the most fragile of her children from the will of the strong. She weathered the buffets of history with resilience—but she weathered the buffets of heredity with something more than resilience: a grace that we, as her descendants, can only hope to emulate. It is to her that this book is dedicated.

  * * *

I. By byte I am referring to a rather complex idea—not only to the familiar byte of computer architecture, but also to a more general and mysterious notion that all complex information in the natural world can be described or encoded as a summation of discrete parts, each containing no more than an “on” or “off” state. A more thorough description of this idea, and its impact on natural sciences and philosophy, might be found in The Information: A History, a Theory, a Flood by James Gleick. This theory was most evocatively proposed by the physicist John Wheeler in the 1990s: “Every particle, every field of force, even the space-time continuum itself—derives its function, its meaning, its very existence entirely . . . from answers to yes-or-no questions, binary choices, bits . . . ; in short, that all things physical are information-theoretic in origin.” The byte or bit is a man-made invention, but the theory of digitized information that underlies it is a beautiful natural law.

  II. In certain bacteria, chromosomes can be circular.

  III. Some topics, such as genetically modified organisms (GMOs), the future of gene patents, the use of genes for drug discovery or biosynthesis, and the creation of new genetic species merit books in their own right, and lie outside the purview of this volume.

  PART ONE

  * * *

  THE “MISSING SCIENCE OF HEREDITY”

  The Discovery and Rediscovery of Genes

  (1865–1935)

  This missing science of heredity, this unworked mine of knowledge on the borderland of biology and anthropology, which for all practical purposes is as unworked now as it was in the days of Plato, is, in simple truth, ten times more important to humanity than all the chemistry and physics, all the technical and industrial science that ever has been or ever will be discovered.

  —Herbert G. Wells, Mankind in the Making

  JACK: Yes, but you said yourself that a severe chill was not hereditary.

  ALGERNON: It usen’t to be, I know—but I daresay it is now. Science is always making wonderful improvements in things.

  —Oscar Wilde, The Importance of Being Earnest

  The Walled Garden

  The students of heredity, especially, understand all of their subject except their subject. They were, I suppose, bred and born in that brier-patch, and have really explored it without coming to the end of it. That is, they have studied everything but the question of what they are studying.

  —G. K. Chesterton, Eugenics and Other Evils

  Ask the plants of the earth, and they will teach you.

  —Job 12:8

The monastery was originally a nunnery. The monks of Saint Augustine’s Order had once lived—as they often liked to grouse—in more lavish circumstances in the ample rooms of a large stone abbey on the top of a hill in the heart of the medieval city of Brno (Brno in Czech, Brünn in German). The city had grown around them over four centuries, cascading down the slopes and then sprawling out over the flat landscape of farms and meadowlands below. But the friars had fallen out of favor with Emperor Joseph II in 1783. The midtown real estate was far too valuable to house them, the emperor had decreed bluntly—and the monks were packed off to a crumbling structure at the bottom of the hill in Old Brno, the ignominy of the relocation compounded by the fact that they had been assigned to live in quarters originally designed for women. The halls had the vague animal smell of damp mortar, and the grounds were overgrown with grass, bramble, and weeds. The only perk of this fourteenth-century building—as cold as a meathouse and as bare as a prison—was a rectangular garden with shade trees, stone steps, and a long alley, where the monks could walk and think in isolation.

  The friars made the best of the new accommodations. A library was restored on the second floor. A study room was connected to it and outfitted with pine reading desks, a few lamps, and a growing collection of nearly ten thousand books, including the latest works of natural history, geology, and astronomy (the Augustinians, fortunately, saw no conflict between religion and most science; indeed, they embraced science as yet another testament of the workings of the divine order in the world). A wine cellar was carved out below, and a modest refectory vaulted above it. One-room cells, with the most rudimentary wooden furniture, housed the inhabitants on the second floor.

  In October 1843, a young man from Silesia, the son of two peasants, joined the abbey. He was a short man with a serious face, myopic, and tending toward portliness. He professed little interest in the spiritual life—but was intellectually curious, good with his hands, and a natural gardener. The monastery provided him with a home, and a place to read and learn. He was ordained on August 6, 1847. His given name was Johann, but the friars changed it to Gregor Johann Mendel.