Social and Ethical Issues in Nanotechnology: Lessons from Biotechnology and Other High Technologies

Dianne Irving Comments
Reproduced with Permission

[Note: If society wants to have a seat at the biotechnology table, it must be a well-informed society, one using accurate scientific facts. To that end, this Biotechnology Law Report article is well worth reading, even though some would not agree with all points taken. Some brief excerpts are copied below. Issues addressed include: the urgent need for informed public input; the ever-present problem of laws not keeping up with science and technology; the use of nanotechnology to clone living human beings; and the failures of administrations, technocrats, politicians, scientists, technology companies, the military, and academia due to their inherent conflicts of interest, which foster secrecy and false scientific data. We are there, now. -- DNI]


http://www.blankrome.com/publications/Articles/WolfsonNanotechnology.pdf

22 Biotechnology Law Report 376
Number 4 (August 2003)
© Mary Ann Liebert, Inc.

Social and Ethical Issues in Nanotechnology: Lessons from Biotechnology and Other High Technologies

Joel Rothstein Wolfson*


... Cloning and nanotechnology (pp. 13-14)

Nanotechnology can be used to clone machines as well as living creatures. Issues similar to those currently plaguing policy makers about biological cloning need to be raised early in the life of nanotechnology.

Proponents of nanotechnology postulate a world where DNA strands can be custom-built by repairing or replacing sequences in existing strands of DNA, or even by building the entire strand from scratch, one sequence at a time. With enough nanorobots working quickly enough, one could build a DNA strand that would produce a perfect clone. The same issues will arise, or re-arise, if nanotechnology succeeds in enabling the cloning of DNA segments, cells, organs, or entire organisms.

... It is likely that nanotechnology's efforts will twist the assumptions underlying the resolution of cloning issues in terms of genetic bioengineering. Policy makers, in setting the boundaries for bioengineered cloning now, should foresee the issues that will arise from cloning by nanotechnology and be ready to reevaluate cloning regulation before nanotechnology perfects its own methods of cloning. If we do not anticipate the nanotechnology problems, the debate will emerge in an environment like the current one, filled with frenzy and uproar, rather than in an atmosphere of reflection and deliberateness.

... Social policy and law always lag behind science

It has often been said that law breathlessly tries to keep up with scientific advances. This is likely to be the case in nanotechnology. In Chapter 13 of his book, Drexler makes a strong pitch for keeping policy makers out of the debate about nanotechnology and urges the institution of technical panels. He summarizes his argument this way:

Unfortunately, leaving judgment to experts causes problems. In Advice and Dissent, Primack and von Hippel point out that "to the extent that the Administration can succeed in keeping unfavorable information quiet and the public confused, the public welfare can be sacrificed with impunity to bureaucratic convenience and private gain." Regulators suffer more criticism when a new drug causes a single death than they do when the absence of a new drug causes a thousand deaths. They misregulate accordingly. Military bureaucrats have a vested interest in spending money, hiding mistakes, and continuing their projects. They mismanage accordingly. This sort of problem is so basic and natural that more examples are hardly needed. Everywhere, secrecy and fog make bureaucrats more comfortable; everywhere, personal convenience warps factual statements on matters of public concern. As technologies grow more complex and important, this pattern grows more dangerous.

Some authors consider rule by secretive technocrats to be virtually inevitable. In Creating Alternative Futures, Hazel Henderson argues that complex technologies "become inherently totalitarian" (her italics) because neither voters nor legislators can understand them.

Dr. Drexler sees two flaws in the present public policy framework. First, regulators have vested interests in maintaining their present power and the status quo. Second, secrecy and the incentive of "technocrats" to cover up mistakes harm the formation of proper public policy.

Thus, Dr. Drexler proposes "fact forums" of scientific experts to replace the present public policy framework. He summarizes his approach as follows:

We need better procedures for debating technical facts - procedures that are open, credible, and focused on finding the facts we need to formulate sound policies. We can begin by copying aspects of other due-process procedures; we then can modify and refine them in light of experience. Using modern communications and transportation, we can develop a focused, streamlined, journal-like process to speed public debate on crucial facts; this seems half the job. The other half requires distilling the results of the debate into a balanced picture of our state of knowledge (and by the same token, of our state of ignorance).

Here, procedures somewhat like those of courts seem useful. Since the procedure (a fact forum) is intended to summarize facts, each side will begin by stating what it sees as the key facts and listing them in order of importance. Discussion will begin with the statements that head each side's list. Through rounds of argument, cross-examination, and negotiation, the referee will seek agreed-upon statements.

Where disagreements remain, a technical panel will then write opinions, outlining what seems to be known and what still seems uncertain. The output of the fact forum will include background arguments, statements of agreement, and the panel's opinions. It might resemble a set of journal articles capped by a concise review article - one limited to factual statements, free of recommendations for policy.

Unfortunately, despite the initial appeal of a scientist-driven public policy debate, society has learned that scientists are not always the best policy makers. A recent article in the Washington Post (May 27, 2002) about flaws in the swine flu vaccine program of 1976 illustrates this problem. The swine influenza epidemic of 1918-1919 claimed the lives of between 20 and 100 million people, so when the virus reappeared in 1976, public health officials took quick action.

The consensus of the majority of medical experts was that an epidemic was likely and that the side effects of a vaccine would be small. The Post notes that "According to various accounts, the idea that a swine flu epidemic was quite unlikely never received a full airing or a fair hearing, although numerous experts apparently held that view. . . . A few experts suggested the vaccine be made and stockpiled but used only if there was more evidence of an epidemic. This was considered but rejected early on. The argument was that the influenza vaccine had few, if any, serious side effects, and that it would be far easier (and more defensible) to get it into people's bodies before people started dying." That is, the Centers for Disease Control, on the basis of the input and consensus from medical experts, concluded that there was a "strong possibility" of a swine flu epidemic and that "the chances seem to be 1 in 2."

In fact, the epidemic never emerged, and the experts were very wrong - the vaccine had severe side effects, the worst being a nerve disease known as Guillain-Barré syndrome. The article notes, "On Dec. 16, the swine flu vaccine campaign was halted. About 45 million people had been immunized. The federal government eventually paid out $90 million in damages to people who developed Guillain-Barré. The total bill for the program was more than $400 million." The article ends with lessons from Harvey Fineberg, a former dean of Harvard's School of Public Health: "Among them: Don't over-promise; think carefully about what needs to be decided when; don't expect the consensus of experts to hold in the face of changing events. The biggest, he said recently, was perhaps the most obvious: Expect the unexpected at all times."

The point here is that although scientific input and expert panels, perhaps even the "fact forums" proposed by Dr. Drexler, are vital to an informed public policy debate, the politics within academia, the push for consensus in panels despite minority views, the rapid shifts in scientific opinion as new evidence emerges, and the need to expect the unexpected at all times all argue for involving others in public policy debates.

Moreover, public policy involves more than scientific truth: it involves a balancing of competing societal needs and goals. Broader questions - how to allocate scarce resources among competing technologies and non-technology needs, how to weigh the costs and benefits of pursuing particular projects, whether certain technologies should be regulated or banned, whether certain harmful activities should be criminalized or merely regulated, and who should bear the legal liability for damages caused by the failure of technology - are all beyond the expertise of technical panels but vital to the conclusion of a rational public policy debate.

On the other hand, it is important for the nanotechnology community to educate the public and policy makers early about important aspects and characteristics of nanotechnology, so that the public policy debate is not tainted by those who slant the scientific facts in the heat of argument in order to persuade. Similarly, schools, universities, and governments must undertake programs early to educate themselves and their students or employees in the science of nanotechnology.

Long-term social effects of the success of nanotechnology

If its proponents are correct, nanotechnology could have vast and sudden impacts on our society.

Policy makers and society need to consider responses to such profound effects. This paper offers only two examples.

Nanotechnology might dramatically increase the life expectancy of human beings through diagnostic or treatment nano-machines, improved drugs, or DNA repair. This is often seen as a purely positive outcome. However, a sudden increase in the life expectancy of a large number of people will likely mean that the carrying capacity of cities, countries, and perhaps even the entire world will be exhausted in supporting currently living persons. This would mean that new births would have to be controlled.

Further, longer productive lifespans mean that key power positions in government, academia, and corporations will not turn over in their normal manner. As a result, we need to consider the effects on society of a slower transfer of power to the next generation. One of the great advantages of new children is that they introduce new ideas and challenge existing norms. It is said that some of the greatest scientists made their most important contributions before the age of 30. Moreover, children grow up to accept as natural things that their parents found impossible to live with. For example, racial integration in jobs and the military, and even interracial marriage - seen a generation ago as ideas that might tear apart the United States - are now accepted as fact by most children. Similarly, the use and acceptance of new technologies, such as computers, are far more prevalent among children than among their more senior counterparts.

If the proponents of nanotechnology are correct, nanotechnology will mean that computers will finally think like human beings. As they envision it, nano-machines will either be small enough to become fast enough to break the barrier into "consciousness," or nano-machines will build biological computers that will mimic the way in which brains think and grow. In either case, if they are correct, we need to come to grips with the effects of conscious computers on society. Will humans find productive things to do with their time and energies if computers can take over their jobs? Who will control whom? Will computers have the ability to rebel against humans? Will computers dominate and eliminate humans and other "living" things? These science-fiction questions will have a greater impact if the most optimistic projections of nanotechnology come true.

...Inherent conflicts of interest between research or commercial exploitation and disclosure or sharing of results

Recently, there have been a number of scandals involving the failure to timely report incidents in human clinical research. For example, the Washington Post reported that "The University of Pennsylvania announced yesterday that its gene therapy institute, which has been an international leader in the cutting-edge field of medical research, will no longer experiment on people."

The Post noted, "The university's action came after the Food and Drug Administration found that Wilson had not properly reported the deaths of experimental animals or serious side effects suffered by volunteers who preceded Jesse Gelsinger, the Tucson teenager who died Sept. 17 after undergoing an experimental therapy for a rare metabolic disorder." These incidents are not limited to the University of Pennsylvania. The same article continues, "In addition to Penn's problems, the field - which tries to cure disease by giving people healthy copies of 'disease' genes - has been rocked by revelations that researchers elsewhere weren't properly reporting the deaths and illnesses of hundreds of volunteers to the National Institutes of Health as required by federal regulations. . . . Most recently, the FDA shut down four gene experiments by a prominent researcher at Tufts University and cited him for numerous safety lapses, including the failure to tell his own institution about the death of a volunteer and the inclusion of patients who did not qualify and may have been harmed by the experimental treatment."

Six months after this article was published, the Washington Post reported that "A Harvard-affiliated hospital in Boston quietly suspended a gene therapy experiment last summer after three of the first six patients died and a seventh fell seriously ill, previously unreleased research records show. Richard Junghans, the Harvard Medical School researcher who led the study, blames the problems on a series of tragic coincidences that were mostly not related to the treatment. But the federal committee that oversees gene therapy had no chance to question that conclusion - or share it with other scientists working on similar experiments - because Junghans did not report the deaths or illness to the National Institutes of Health when they occurred, as required by federal regulations."

These incidents illustrate the point that researchers, commercial and academic alike, have inherent conflicts of interest that lead them to conduct research improperly, to conceal failures, and to deny mistakes. The proponents of nanotechnology have argued that scientists can police themselves and can be trusted to adopt and use safe experimental methods and to report incidents. Society has learned from experience that even the best-intentioned researchers do not always follow safe protocols and report adverse events.

Related to this conflict is the conflict that arises during commercial exploitation of a new technology. Commercial exploitation of science inherently requires that researchers keep the outcomes of their research confidential in order to give themselves (or their employers) an advantage over competitors and to keep competitors from "free-riding" on the results of this very expensive research. This is not inherently a bad thing. Being able to keep important developments secret until they are ready to be marketed and sold to the general public gives researchers an important incentive to continue to do leading-edge research.

On the other hand, as seen in the medical field, this has sometimes meant that drug side effects or bad interactions are not timely disclosed to regulatory agencies or the public. The same conflict could affect nanotechnology research.

Finally, there is a related conflict-of-interest problem where the scientist has a financial stake in the outcome of the research. For example, as the Washington Post reported on its front page on June 30, 2002, "One of the nation's largest cancer centers enrolled 195 people in tests of an experimental drug without informing them that the institution's president held a financial interest in the product that stood to earn him millions. The tests at M.D. Anderson Cancer Center in Houston involved Erbitux, the controversial cancer drug that is at the center of broad investigations in New York and Washington. Most of the patients, who were quite ill by the time they enrolled in the tests, have died. The cancer center, a unit of The University of Texas system, has since acknowledged that it should have informed the patients of the conflict of interest involving its president, John Mendelsohn. It has recently adopted policies to ensure that patients are told ahead of time if Mendelsohn or the cancer center itself has a financial stake. Ethicists say that such conflicts of interest pose risks to patients and to the integrity of scientific studies."

Conflicts of interest are not new, but they do pose a societal risk. Policy makers and regulators need to be proactive in evaluating ways to ensure that these conflicts of interest do not keep societal risks hidden from them.

... Nanotechnology as a terrorist weapon

Because of its microscopic size, easy dispersal, self-replication, and potential to inflict massive harm on persons, machines, or the environment, nanotechnology makes a tempting terrorist weapon. Since September 11, 2001, concerns about the conversion of useful machines into terrorist weapons have been heightened. If rogue states and groups can acquire biological and chemical weapons of mass destruction, then surely they can learn to use nanotechnology.

... Nanotechnology might someday permit one to assemble, molecule by molecule or chain by chain, any compound one desires. Thus, terrorists could use nano-machines to assemble pure quantities of dangerous toxins, even if they have no access to the living creature that normally creates a given toxin or to the raw material needed to produce it. That is, at least theoretically, a nano-machine could build the anthrax toxin, molecule by molecule or at least chain by chain, in great abundance, even if the terrorist had no access to the spore-forming bacterium Bacillus anthracis. The recent success at assembling the polio virus, without even the aid of nanotechnology, well illustrates this threat, as does the initiation of a project to remove the genes from Mycoplasma genitalium and replace them with a pared-down, artificially constructed string of DNA containing just enough genetic material to get the cell going again. Terrorists could take relatively innocuous forms of a toxin or chemical and, by making a small addition to or deletion from the natural structure, change it into one far more dangerous.

Inadvertent release or inadvertent spread of nanotechnology

As we have learned with other technologies, scientists thought they had proven methods to prevent the inadvertent spread of biotechnology into the wider environment. They were wrong. Nanotechnologists face the same risks.

Well-intentioned and expert bioengineering scientists were confident that genetically engineered plant seeds would not be able to migrate into non-engineered fields and would not enter the human food chain by accident. They were wrong. Genetically altered seeds and products have been discovered in human foods (such as taco shells), and seeds intended for animal feed were planted by farmers and spread into fields of non-engineered crops. Similarly, kernels from an experimental corn plant altered to produce a pharmaceutical product may have contaminated a subsequent soybean crop intended for human consumption.

Likewise, food experts were confident that they could control the disease agent that causes "mad cow" disease and exclude it from the human food chain. They, too, were wrong. That failure has led to mass animal kills, causing enormous social and economic costs for farmers and society, and has had wide-ranging effects, including an erosion of public confidence in government and in the current means of ensuring safe food supplies.

Nanotechnologists argue that inadvertent spread will not happen because nano-machines need a confined source of power, like a battery. They argue that any inadvertent release is not likely to have significant detrimental effects, because the nano-machines will simply run out of energy quickly. This assumption may be naïve. Scientists have already postulated that nano-machines could be built to draw energy from the environment around them. Moreover, as lovers of electronic gadgets know, batteries are becoming better and power requirements are lessening. As a result, while a nano-machine may eventually fail for lack of power, millions of them, inadvertently released, could do great damage before then.

It is important to keep in mind that the risk of the inadvertent spread of nanotechnology is less of a concern in the near term, because most nanotechnology is in the early experimental or developmental stage. Scientists have long worked with deadly pathogens in laboratories across the world and have established effective protocols that protect researchers and the general public from the inadvertent escape of those pathogens from the facilities that study or genetically alter them. Similar research protocols should be able to protect the public from an inadvertent spread or release of nanotechnology during the developmental stages. The Foresight Guidelines on Molecular Nanotechnology are one attempt to establish principles to guard against the inadvertent release of nanotechnology. Nonetheless, inadvertent release or spread of nanotechnology during deployment remains a serious risk. Scientists and policy makers need to keep the risk in mind and devise coordinated contingency plans to deal with that eventuality.

... Military funding and directed research can distort scientific research

The military is an enormous funder of scientific research. However, the mission of this funding is not the advancement of basic science but the development of science that can produce weapons, detect the enemy, or protect troops against enemy attack.

... Military funding poses both personal ethical and societal challenges. Personally, scientists involved in nanotechnology need to be aware of, and come to grips with, the fact that their own research may lead to the production of weapons of mass destruction. A number of scientists involved in the development of the nuclear bomb, in retrospect, found that knowledge hard to live with.

Military funding can also have distorting effects on the progress of science as a whole. Scientists need funding for their research, and they need to prepare grant proposals that win approval. Thus, they naturally tailor their proposals and research to areas that will catch the attention of the granting organization. In the case of the military, they must slant proposals toward weapons development.

In some cases, the funding organization tells the researchers what types of proposals they are looking for. In such cases, it is obvious how scientists must alter, or at least tailor, the focus of their research to meet the goals of the request for proposals. In other cases, the proposals are more open, but again, the scientist must write to his or her audience and propose projects that will win approval.

In many cases, one can look at proposals that have won in the past and follow that well-trodden path. In this way, even when there is an open call for proposals, the proposals that are submitted are distorted by the knowledge that they are being submitted to a military organization.

But this structural distortion is not unique to military funding: it also occurs in the growing area of directed research. Corporate and other directed research has been increasing. This is not necessarily a bad thing: particularly as governmental funding for R&D and basic scientific research is reduced, corporate funding of basic science is welcome. Moreover, a partnership between those with important scientific expertise and those who are producing actual products and services can yield significant results.

On the other hand, certain directed research by tobacco companies has been cited as an example of corporate research money being used to advance unsound scientific positions in order to ward off or counter commonly held scientific principles.

The tendency of directed money to distort otherwise objective views cannot be denied, whether that money comes from corporations, the military, or other sources (including nonprofit advocacy groups) that seek a particular outcome for the research.

Society must come to grips with the good and bad effects of directed research. Although this is not a topic unique to nanotechnology, it is one that can have the effect of distorting or inappropriately redirecting science onto paths that are not in society's best interests.

