What if the experts are wrong?

Ubiquity, Volume 2007 Issue January | BY Denise Caruso 



"The question is, how do you prepare to be wrong? If you know you can't walk away from the consequences of what you do, how do you not screw it up?" said Todd La Porte, sitting across from me in the dappled light of the faculty dining room at the University of California, Berkeley. La Porte, a former Marine, is also a veteran political scientist who is internationally known for his thoughtful study of "long-term stewardship" of man-made hazards; that is, how a society prepares to take care of the messes it has made that it can't get rid of, generations into the future.
   La Porte has spent many years studying how nuclear engineers and scientists go about the business of containing radioactive waste, which to date is the most persistent toxic substance known to (and created by) man. I had contacted him when I first started my research into risk and genetic engineering. It occurred to me that if something bad happened as a result of our self-assured release of transgenic organisms throughout the world, we might eventually need to have a more intimate understanding of his work. For starters, as La Porte noted, "nuclear waste doesn't reproduce." A population of living, multiplying transgenic organisms gone awry could end up being significantly more difficult to contain than radioactive sludge. Should such a thing happen, it would create stewardship challenges, for generations into the future, that are already far beyond our present scientific knowledge or capabilities.
   And while the thought of being wrong about having stocked the entire planet with self-replicating hazards was sobering enough, La Porte posed yet another, equally troubling question about the topic of my inquiry: "How are you going to get the scientists to listen to you?"
   After decades of study, La Porte himself had no answer. "My experience with technical people, with scientists, is that they're utopians and they see us as the problem," he said. "This was a tragedy in the nuclear industry."
   Nuclear scientists, said La Porte, entered their profession believing they were doing something good for the world by developing what was then called "atomic energy." Many of us remember this era, when nuclear energy was pervasively (and now infamously) touted by the nuclear industry as 100 percent safe, clean and "too cheap to meter." The scientific basis for those claims was accurate as far as it went, but clearly it didn't go far enough. When the industry's claims of safety literally blew up—with operator and engineering errors triggering the meltdowns at Three Mile Island in 1979 and Chernobyl in 1986—the public rejected the technology as too risky for the benefits promised by its government and industry champions.
   A new focus on our global dependence on fossil fuels has some people trying to salvage nuclear energy's reputation. But nuclear power plants are still considered too vulnerable to human error to be reliable, and the issue of how to safely store and safeguard radioactive waste remains a weighty and so far intractable problem, for both public health and global security.
   "To think that other people might suffer as a result of their actions is not part of the expert's world, or it gets pushed away in the drive to deploy the technology," said La Porte. "But what are the consequences if it turns out that all the things they believed in are wrong? That's really hard. And most technical people can't talk about this. What they do is theology to them, not science.
   "This attitude used to be less of a problem because we couldn't destroy the earth," La Porte continued. "But now we can. The consequences of error are likely to be greater than they were. The power of technology is much greater. The capacity for untoward error is much greater. And the effects of technology at scale have not been tested in the area of biology."

This untested technology is, of course, biotechnology. Using a laboratory technique known as "recombinant DNA," scientists now can splice together the genetic material from deep within the cells of two or more organisms of different species. As a result, they can "engineer" living hybrids with new traits that would have been impossible to create using traditional breeding techniques.
   Genetic engineering commenced what was heralded as a new era, both of scientific discovery and commercial potential. The technique itself was quickly patented, and the first biotech company, Genentech, Inc., was launched in 1976. A torrent of research and experimentation followed, and a new generation of genetic engineers immediately began to add, remove or otherwise modify the DNA of all kinds of living things. The term "genome" had long since been understood to describe all the genes in an organism. But this newfound ability to directly manipulate individual DNA sequences to change the way that organisms behave provided new impetus to discover and map as many genes and their functions as possible. In 1977, for the first time, the entire complement of genetic material in a biological entity—a virus that kills bacteria, called a bacteriophage—was mapped and published.
   Many more genomes were mapped and published in subsequent decades. But the climax of these efforts was the dramatic completion of a working draft of the human genome map in June 2000. For many people, this historic achievement—combined with the power of recombinant DNA to re-engineer the structures and behaviors not just of microbes, plants, and other animals, but of humans as well—inspired researchers to dream big about how humankind could use this knowledge.
   And like La Porte's nuclear utopians, dream big they have done. Over the past several decades we've heard an ongoing stream of promises about how genetic engineering and its products are on their way to eliminating infectious disease, ending world hunger and even repairing the tremendous damage we've wreaked upon the biosphere.
   But what we know from history is that every promise based on discovery or invention, no matter how positive, comes factory-equipped with its own unintended dark-side consequences. For all the utopian results that genetic engineers have imagined for us, the ability to "rewire" the genetic material of living organisms could just as plausibly yield an equal and opposite nightmare. It is not especially difficult to come up with scenarios whereby mucking around in the genes of living organisms leads to serious biological, social and/or economic disruption. Yet neither knowledge of history nor dark-side scenarios has tempered the zeal or the speed with which the products of genetic engineering are being dispatched into the global marketplace.
   Are the experts who build these products thinking critically about these dark possibilities? What set of facts, based on what specific scientific knowledge, have they provided to government regulators who decide whether the products of genetic engineering are safe? Do either the scientists or the regulators know enough about what they're doing with this largely unexplored science to speed biotech products to market as quickly as they are today?
   As sensible and rational as these questions sound to most of us, it turns out that this is a most unwelcome line of inquiry from the expert perspective. Ask the people whose livelihoods are intertwined with science and technology about the risks of what they do or sell—particularly if they are at or near the levers of power in academia, industry or government regulation—and you'll generally get a scorching look of suspicion and almost always at least one (and sometimes all) of the following three reactions:
   "People are ignorant. This technology is absolutely safe."
   "The public is scientifically illiterate. There's no point involving them in the conversation; they just get scared and stop us from doing our work."
   "The problem is that people just don't understand risk."
   On the surface, there's no denying public ignorance. Making that case is like shooting fish in a barrel. Not many people even know how their televisions work, let alone how scientists can "engineer" the DNA that resides deep within the cells of all living things. But this is a terribly elitist argument. As the Canadian philosopher John Ralston Saul wrote, "When faced by questioning from non-experts, the scientist invariably retreats behind veils of complication and specialization, [making] it impossible for the citizen to know and to understand, and therefore to act, except in ignorance." What's more, the claim that ordinary people are incapable of understanding the risks of scientific and technological interventions has been proven to be patently untrue, time and again, by risk researchers.
   This tacit refusal to directly address the public's concerns about risk obscures many larger and more profound truths about the process of scientific inquiry and discovery, truths that are rarely acknowledged in the context of how this process affects expert assessments of risk. Using the false pretense of public ignorance as a shield turns the public's legitimate—and generally quite relevant—questions into a dangerous game of "us versus them," when far more complex factors than public ignorance are at play.

To begin with, those who discover, invent or work with new technologies are often spectacularly nearsighted about the risks those technologies create. To deny this is to ignore at least a century of the history of biology and technological advancement.
   The tragedy of the drug DES, for example, continues to reverberate through generations. As many as 10 million pregnant women in the U.S. alone took diethylstilbestrol, a synthetic estrogen, between 1940 and 1971 (despite several studies that proved its ineffectiveness), hoping that it would prevent miscarriages. But in 1971, researchers discovered a link between DES and what had until then been a rare cancer, clear cell adenocarcinoma, in the daughters of women who had taken the drug.
   Animal studies a decade earlier had signaled possible links between early estrogen exposure and later cancers in offspring, yet these findings had been dismissed as irrelevant to human health by doctors and drug makers, as well as by the U.S. Food and Drug Administration (FDA), which had approved DES. But even after human studies made the linkage irrefutable, researchers had to fight to get colleagues in the scientific and medical communities to believe the proof. Remarkably, the skepticism continues, even as many more problems have surfaced in the subsequent decades, some of which also affect sons of DES mothers. Research now shows that even the grandchildren of women who took DES are at high risk for cancer and other DES-related health problems.
   Other, similar long-term disasters may be looming as researchers discover serious health implications for people exposed to possibly the most ubiquitous class of man-made substances in our lives: plastics. Studies released in 2003 examined three chemicals used in plastics that are found in virtually everything around us: food containers and cooking utensils, clothing, cars, furniture, medical and dental appliances, paint, and even bubble gum. Population samples show that as many as 92 percent of Americans may have traces of one of them, PFOA, in their blood. The problems that all three of the chemicals are suspected to cause range from kidney, reproductive and brain-development problems to thyroid cancer. The millions of children exposed to DES pale in comparison to the multitudes worldwide who have been exposed to any one, let alone all three, of these chemicals.
   As has become standard practice, industry groups representing the manufacturers of these chemicals continue to insist that they are safe, and hold up as "proof" the fact that regulatory agencies have not yet taken action against them. (One exception is PBDE; in 2004, the U.S. Environmental Protection Agency (EPA) finally negotiated a phase-out of this chemical as a result of pressure by consumer groups.)
   Another similar health crisis is already well under way as a result of our overuse of man-made antibiotics: the steep increase in antibiotic resistance that many dangerous pathogens have developed.
   Antibiotics were once considered miracle drugs that, for the first time in history, greatly reduced the probability that people would die from common bacterial infections. But once these new drugs became cheap and readily available, doctors prescribed them for virtually every ailment, often thoughtlessly or incorrectly. As a result, bacteria became immune to the drugs that once killed them.
   Resistance to antibiotics has become pervasive among pathogens that infect people and animals all around the world. In hospitals in particular, patients often contract "superbugs," like Staphylococcus aureus or Streptococcus pneumoniae, that now are virtually unkillable. Staph infections, for example, are already resistant to common antibiotics like penicillin, methicillin, tetracycline and erythromycin. As a result, these low-cost treatments have become practically useless for common infections. This leads to more frequent use of newer and more expensive compounds, which in turn leads inexorably to the rise of resistance to the new drugs as well. A never-ending, ever-spiraling race to discover new and different antibiotics has ensued, just to keep from losing further ground in the battle against infection.
   The situation is worsened by the fact that the genetic material responsible for conferring antibiotic resistance can move with relative ease between different species of bacteria. This is evolutionary selection in action: the transfer of resistance makes it possible for pathogens never exposed to an antibiotic to acquire resistance from those that have been, and thus survive. (Antibiotic-resistance genes play an important role in genetic engineering as well, as you'll see.)
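   The compounding effect of these two mechanisms, selection by the drug plus gene-swapping between cells, is easy to see in a toy calculation. The sketch below is a deliberately crude illustration with made-up rates, not an epidemiological model; it simply tracks what fraction of a bacterial population carries a resistance gene under the two pressures just described:

```python
# Toy sketch (illustrative rates, not real data): antibiotics kill
# susceptible cells, enriching for resistant ones, while resistance
# genes also move "horizontally" between cells, reaching lineages
# that were never exposed to the drug at all.

resistant = 0.01    # 1% of cells start out carrying a resistance gene
KILL = 0.9          # the antibiotic kills 90% of susceptible cells
TRANSFER = 0.05     # per-generation horizontal gene-transfer rate

for generation in range(1, 11):
    # Selection: susceptible cells die off, so the resistant share
    # of the surviving population grows.
    survivors_resistant = resistant
    survivors_susceptible = (1 - resistant) * (1 - KILL)
    resistant = survivors_resistant / (survivors_resistant + survivors_susceptible)
    # Horizontal transfer: surviving susceptible cells can acquire the
    # gene from resistant neighbors without ever having been selected.
    resistant += (1 - resistant) * TRANSFER * resistant
    print(f"generation {generation:2d}: {resistant:6.1%} resistant")
```

   Even starting from a 1 percent resistant minority, the resistant share passes half the population within two generations and saturates soon after, which is the spiral described above: each new drug buys time, and the same dynamic then erodes it.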
   Another great concern is that in the United States, antibiotics are still routinely included in the diets of healthy livestock, for no reason other than to make the animals grow faster. But now the bacteria the animals harbor have become widely resistant to antibiotics, too. It has been well documented by the U.S. Centers for Disease Control and Prevention (CDC) and by the U.S. FDA that since these farm animals were first fed medically unnecessary doses of antibiotics, the meat supply has become highly contaminated with bacteria. What's more, foodborne illness has become a much more serious problem, especially illnesses caused by Salmonella, Campylobacter and E. coli, pathogens that have become resistant to nearly all antibiotics. In addition to the issue of foodborne illness from contamination, resistant bacteria are passed along to humans who eat the animals that carry them, like chickens and cows, or their products, like eggs and milk.
   As a result of this growing problem, many countries have long since banned the use of antibiotics for growth promotion or disease prevention. In the U.S., however, it took until March 2004 before the FDA disallowed just one single type of antibiotic—enrofloxacin—that was widely used in poultry. Enrofloxacin in animals metabolizes into ciprofloxacin, a.k.a. Cipro, the drug that made headlines in 2001 as the treatment of choice for humans who inhale anthrax spores.

Scientific shortsightedness does not apply only to products, but to discoveries as well. It can be hard to measure how much scientific progress is held back by a research community too mired in its prejudices to accept truly revolutionary discoveries. The history of science is full of such examples, but a relevant one for the 21st century is the story of Stanley Prusiner, a neurologist at the University of California, San Francisco. Prusiner lost much of his funding, his academic tenure (temporarily), and, for many years, his credibility in his field—all for research that would later bring him a Nobel Prize in Medicine. In 1982, he discovered a strange misfolded protein that he called a "prion" (a coinage from "proteinaceous" and "infectious") and that apparently could transmit disease.
   In fact, it is now widely known that prions can transmit disease. They are the infectious agent that causes the brain-wasting disease in animals known as transmissible spongiform encephalopathy (TSE). In its variant forms, it's known as bovine spongiform encephalopathy (BSE), or mad-cow disease, in cattle; scrapie in sheep; and variant Creutzfeldt-Jakob disease (vCJD) in humans. TSE was also recently discovered in goats and deer, and it affects other animals as well, including squirrels, elk and mink. In fact, an ongoing TSE epidemic among deer and elk in Colorado and Wyoming is exposing both cattle and hunters who share the same terrain to TSE. No human or bovine infections have been documented, but our lack of understanding of the disease and its long incubation period—it can take from three to 50 years for symptoms to appear—continues to be worrisome to many, despite reassurances from the USDA.
   The research community had believed that these brain-wasting diseases, which Prusiner had traced to prions, were caused by viruses. A virus is a parasite with no cell of its own, so it has to "hijack" the cells of another organism in order to reproduce and become infectious. A virus can do this because it carries the machinery of reproduction, i.e., DNA or the RNA molecules that help decode the information carried by DNA. But prions contain no DNA or RNA; in fact, they are the only known infectious agents that don't. Bacteria contain DNA, as do fungi, parasites and protozoa. So when Prusiner isolated the infective protein particle, scientists simply refused to believe that it contained no genetic material. Indeed, more than 20 years after his discovery, some definitions still call prions infective particles "which (almost certainly) do not have a nucleic acid genome."
   Despite Prusiner's prior achievements, many scientists in the research community also discounted his more recent claims that prions reside not only in the spinal cord but also in the muscle tissue of animals that we eat. Yet by 2006, prions had in fact been discovered in many other parts of animals, including the muscle tissue of North American deer and elk. And many of these scientists still reject his ideas about the relationship of prion diseases to other disorders, such as Alzheimer's and Parkinson's diseases. Time will tell who prevails.
   Scientists don't hurt only themselves with this kind of behavior. They hurt us, too. By refusing, for whatever reasons, to look beyond the narrow boundaries of their own expertise, they often have overlooked the cause of problems as well as potential cures or solutions. Prusiner himself best sums up what scientists risk by indulging in these dogmatic attitudes: "While it is quite reasonable for scientists to be skeptical of new ideas that do not fit within the accepted realm of scientific knowledge," he wrote with great understatement in his Nobel autobiography, "the best science often emerges from situations where results carefully obtained do not fit within the accepted paradigms."

What's even more distressing is how frequently scientists reject these "results carefully obtained" when they actually do fit within the bounds of paradigms they understand. This type of scientific myopia may be closest in spirit to the issues in question around genetic engineering. It is also where we find what may be the most persistently damaging effect of shortsightedness: the destructive and exponential growth of invasive species.
   An invasive species is any plant, animal, microbe or virus - including any of the biological components that can propagate them, like seeds, eggs, spores or infectious bits - that's not native to a given region or ecosystem and that out-competes native species for space and resources.
   While humans are almost always responsible for such invasions, not all invasive species are purposely introduced. They've been known to stow away in ballast water of ships, in crevices of airplanes, or attached to the clothing or shoes of unsuspecting travelers. One of the most rampant accidental invaders was never meant to live outside a fish tank. This human-bred "aquarium strain" of green algae called Caulerpa taxifolia has already literally choked the life out of tens of thousands of acres of seafloor around the world.
   Everything that was good about this plant in a fish tank turned out to be an ecological disaster in the wild. It was bred to tolerate cold water and it grows fast (up to a full inch per day). Because even a small broken-off fragment can form a whole new plant, it hitches a ride on boat anchors and fishing gear and quite easily starts up new colonies at great distances from the original source. What's worse, outside of the native strain's tropical home, no fish will eat it because it produces toxins that taste bad.
   There's a lesson here, both about the banality of evil and about the consequences of technology at scale, which should be carried forward into any conversation about genetic engineering. Who would have imagined, before Caulerpa, that such far-reaching damage could be caused by an activity as everyday as dumping the contents of an aquarium down a drainpipe?
   People make mistakes and accidents happen, of course; unintended infestations by alien plants, animals and microbes are one of the risks of global mobility. But even more distressing are the invasive species that were purposely introduced. Time and again, various government agencies and people with the best of intentions, having had quite enough of one pest or another, have imported various critters to combat them. These immigrants, deliberately introduced, often become pests far worse than those they were brought in to eradicate.
   One notorious example is the Hawaiian cane toad, Bufo marinus, brought into Australia in 1935 to rid its sugarcane plantations of cane beetles. The brains behind this idea was the Australian Bureau of Sugar Experimental Stations, which apparently didn't ask Bufo for references before hiring. The fact is that this toad has an immense appetite for everything but the cane beetle. It is big and aggressive, and its skin is poisonous to any natural predator—except one lone snake species, which is destined never to hunger again. Worse yet, the tadpoles of cane toads mature earlier than other tadpoles in Australia, so in addition to being nasty-tasting to potential predators, the hungry babies also eat up everyone else's food.
   With these unnatural advantages, it didn't take long for the toads to spread along the north coast toward the center of the continent, eating all the native amphibian and invertebrate species in their path—except, as noted, the cane beetle, which flies over their heads, and the cane grubs. While the grubs at least stay within reach on the ground, the toads apparently cannot be bothered with them, since they live below the soil and getting at them requires at least a token amount of effort—effort that's quite unnecessary given the toads' luxurious circumstances.
   Similarly, the introduction of the European rabbit to Australia as a game animal proved to be a mistake of magnificent proportions, and the proposed solutions are proving even more frightening than the original invasion. By most historical accounts, 24 wild rabbits arrived in Australia from England on Christmas Day in 1859; 10 years later, some 2 million per year were being shot with no noticeable effect on the population. The original colony may have produced more than half a billion rabbits on the continent, destroying vast tracts of vegetation and contributing to the extinction of many native marsupial species, like the bandicoot.
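   A moment's arithmetic shows why the colonists never stood a chance of shooting their way out of the problem. The sketch below, a simple compounding calculation using the figures cited above rather than any formal population model, computes the sustained annual growth multiple needed to turn two dozen rabbits into half a billion:

```python
# Back-of-the-envelope check: starting from 24 rabbits, what sustained
# annual growth multiple reaches the population sizes described above?

INITIAL = 24            # rabbits landed on Christmas Day, 1859
TARGET = 500_000_000    # "more than half a billion"

for years in (10, 20, 30, 40):
    # Solve TARGET = INITIAL * rate**years for the annual multiple.
    rate = (TARGET / INITIAL) ** (1 / years)
    print(f"after {years:2d} years: needs about {rate:.2f}x per year")
```

   Even stretched over 40 years, the required multiple is only about 1.5x per year, trivial for an animal that can raise several litters annually, and at the 10-year mark it is still a biologically plausible 5.4x, which is why shooting two million rabbits a year made no noticeable dent.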
   If we were looking for evidence of scientific hubris, we need look no further than two subsequent steps in the rabbit saga. Australian scientists decided they would try importing diseases into the rabbit population to kill them. Both of the diseases they were considering as imports were also not native to Australia and were supposed to be spread only by rabbit-to-rabbit contact. Instead, as the scientists found out wholly by accident, what spread the diseases (myxoma in the 1950s and rabbit hemorrhagic disease virus in 1995) were biting and stinging insects such as mosquitoes. For some unfathomable reason this was unforeseen by researchers, despite the fact that mosquitoes and other similar biting and stinging insects are among the best known disease vectors in the world. Luckily, the two diseases weren't like West Nile; that is, transmissible to humans or other animal populations by mosquito. Otherwise, the scientists might have had an epidemic on their hands.
   A cautionary tale written by two scientists notes that the two attempts to introduce diseases to try to control the rabbits are distressingly similar:

   ...In each case, because of errors in our assumptions, the organism successfully escaped rather than being [purposely] released ... The pattern of invasion did not match that predicted from available information, as the proposed mode of infection was not that used. The speed of the spread was also much faster than expected and was unstoppable.
   The unexpected behaviour of all three species places a serious question mark over our ability to predict the behaviour of invading organisms placed in ecologically different environments, and thus to protect a naive environment from an invasion. The lessons that should have been learned from the escape of myxoma were not accepted, and RHDV escaped in a virtually identical manner. Have we learned our lessons yet, or can we expect similar escapes in the future?

Based on recent history, we can hazard an answer to both those questions: No, we haven't; and yes, we can.
   At various times Australian scientists have tried to import viruses from other countries, including Venezuela, as biological controls for cane toads as well - which, considering that the toads were intended to be a biological control themselves, is something akin to fighting fire with gasoline. A research organization established nearly a century ago to benefit Australian industry claims to be working with "cutting edge genetic technology" to find a biological control method to "stop the hop" of cane toads across the continent. These types of efforts continue despite the fact that in 2001, researchers in Canberra, working to create a genetically engineered sterility vaccine to control a national infestation of mice, instead accidentally created a strain of mousepox virus so lethal that it killed even the mice that had been inoculated against it. (A U.S. team of researchers immediately replicated it, in the name of biological defense.)
   While public ignorance is most often cited as the reason that risk is so misunderstood, these are examples of scientific ignorance. In each of these situations, the proposed intervention was subjected to some degree of regulatory and/or scientific scrutiny. Each received some degree of pushback from concerned scientists or members of the public who questioned its assumptions of safety—often in the form of data that refuted the proposed actions—and those involved in making the decision had an opportunity to revise their "beliefs" (I use the term advisedly). Instead, they ignored the pushback and declared the interventions to be safe, or safe within what were believed to be easily defensible and understood boundaries.

Another factor in the public's relationship to risk, which one could probably call unintended public ignorance, affects us more often than we can know. Unintended ignorance results when regulatory agencies or industries willfully downplay or deny risks that are already known to them, in the interest of protecting financial or some other kind of gain. In the early 21st century, this game of "hide the risk" has already reached epidemic proportions, in the United States at least.
   Public-interest groups in the U.S. have railed for decades about the dangers of the revolving door between government and industry, whereby people with a financial interest in a given industry—industries that generally provide largesse in various forms to those in power—are asked to serve as regulators of that industry. The practice has become increasingly common and bold, and as a result, American citizens are witnessing an ongoing rollback of hard-fought federal safeguards in agencies that regulate food safety, the quality of drinking water, worker health and safety, civil rights, toxic pollution, health care and other common public resources.
   The most blatant recent examples in the U.S. have involved the government censorship of EPA reports that connected auto emissions and other human activities with global warming; EPA administrators selectively editing a risk analysis that the agency commissioned on mercury emissions; the sabotaging of a World Health Organization initiative on obesity because the sugar and packaged-food industries felt "attacked" and opposed its suggestions; and the stacking of a CDC committee with industry-friendly experts to re-examine federal standards for lead in school drinking water.
   With just these few various historical examples in mind, it's not surprising that people don't trust the information supplied by their governments or the scientists who advise them about the risks and benefits of genetic engineering. If we want to get the straight story, we need to look closely at two key areas. First, we must ask whether we are getting the whole story about how much scientists truly understand the biological processes that they are altering via genetic engineering. Second, where risk itself is concerned, we need to ask if we are getting the quality of analysis we deserve from our government regulators about those genetic alterations and the biotech industry that sells the products that result.
