AI’s Ability to Manipulate Decision Making Requires a Moratorium on Its Use in Obtaining Consent for Biomedical Research

By Jennifer S. Bard

Until more is known about how to mitigate the threat of AI-fueled persuasion in human subject research, a ban on its use is the only reasonable way to keep the promise that we, as a country, made to those harmed while participating in research.

The federal government’s commitment to assuring that participants in human subject research provide fully informed consent dates back to the U.S. Public Health Service’s (USPHS) Syphilis Study at Tuskegee, whose revelation to the public sparked today’s legal framework for regulating, and protecting participants in, the research the government funds (the “Common Rule”), as well as research done in anticipation of receiving FDA approval to market a new product.

This commitment to research participants has been re-affirmed consistently since the laws were originally passed in the late 1970s. In 1997, then-President Clinton apologized to Tuskegee survivors and made a pledge to their families that the government would work to rebuild “the broken trust” by committing to the ethical principles incorporated in today’s protection laws to “mak[e] sure there is never again another episode like this one.”

In 2017, federal regulators updated the Common Rule and strengthened its provisions related to consent. While the U.S. Food and Drug Administration (FDA), the agency responsible for protecting participants in commercial drug trials, is not a signatory to the Common Rule, Congress has required that the FDA harmonize the protection it offers participants so that it is at least as strong as that provided under the Common Rule.

Yet, for all the highly public concern about AI technology’s ability to generate text and create “deep fake” images, less attention is being paid to its demonstrated ability to contaminate the process of human decision-making. Those seeking to influence decisions that people believe they are making on their own, such as which car to purchase, which show to watch, or which candidate to vote for, can buy AI technology to achieve that result in ways that are not just undetectable but essentially impossible for even those who write the code to explain. The software learns as it is used, and can make continuous adjustments during an interaction with a target.

For example, in 2022, a team of researchers announced that they had created a program to exploit “human choice frailty”: over a series of interactions, it trained the humans interacting with it to “prefer” a choice the researchers had designated in advance, one that was not in their best interest.

Because this technology is so new, the familiar vocabulary of advertising and marketing used to describe it lacks the depth to articulate what is really going on.

The European Union has been sufficiently concerned about the ability of even earlier iterations of AI to manipulate decision making that it has already “called for the prohibition of the use of AI systems that cause or are likely to cause ‘physical or psychological’ harm through the use of ‘subliminal techniques’ or by exploiting vulnerabilities of a ‘specific group of persons due to their age, physical or mental disability.’” The World Health Organization supports this prohibition and, in particular, describes such manipulation as making informed consent impossible in the health care setting.

These warnings echo those of the U.S. military, an early adopter of AI to both enhance human decision-making and alter the behavior of our enemies.

Yet, despite these warnings of imminent danger, the U.S. has so far done nothing to regulate the use of any form of AI, let alone AI that can influence decision-making. This is at least in part because of the lobbying power of the industries profiting from its use. But it also reflects the failure of U.S. law to protect the right of individuals to be free of manipulation so long as they are not being deliberately deceived.

In contrast, the EU recognizes a human right to make one’s own decisions. In the U.S., concern over the rights of individuals affected by AI has led to non-binding principles, such as the White House’s recent AI Bill of Rights, but there is no general law protecting an individual’s right to make their own decisions.

But the laws protecting participants in federally-regulated research studies do provide an enforceable ban on undue influence in the decision-making process. The Common Rule gives potential participants a right to make an informed decision about whether or not they will participate. This protection extends beyond that provided by the common law for consent to medical treatment because research studies are, by definition, conducted not in the best interests of the participants but rather in an effort to obtain information. Moreover, potential participants cannot waive their right to be protected.

Drawing the line between selling a study and informing people of its benefits has always been difficult, and AI’s enhanced ability to sell products makes it even more so. But ethics review boards, often called IRBs, are already required to pre-approve all materials used to advertise studies, as well as those presented directly to potential participants. AI’s ability to adapt the material it presents in whatever way is most persuasive to an individual decision maker renders such pre-review impossible. For example, it is now possible to create an informed consent conversation between a potential participant and an AI avatar, online or in person, with the vocal, visual, and even syntactical characteristics of the person the potential participant would find most persuasive. This could be a prominent scientist for one person or a movie star for another. Moreover, even if there is no effort to impersonate a particular individual, the interface could, in response to emotional cues, adopt a subject’s own personal characteristics, or those of someone who looks or sounds like them, or of someone they trust or admire. None of this would be permissible in human-to-human informed consent conversations, and it should not be allowed through avatars either.

These laws requiring pre-approval of informed consent materials apply to all federally-regulated research, but the threat is particularly great in the area of clinical drug trials, because the potential for profit from having a drug approved by the FDA is so large and the challenges of enrolling eligible participants so significant. One researcher explained that “recruiting the planned sample size within the defined time frame in clinical trials has proven to be the chief bottleneck in the drug development process.” In addition to this general pressure to recruit and enroll biomedically eligible participants, sponsors face new pressures, including FDA guidance, to diversify their trials by including historically under-represented populations, with specific emphasis on Black participants.

Companies (contract research organizations) offering their services to assist sponsors are already promoting their use of “AI” to obtain “more diverse” populations. While the intent to remove barriers to participation is commendable, many experts are already concerned that the lack of enrollment is due to a rational distrust based on personal experience with racial discrimination in the health care system, rather than systemic exclusion. While it is not possible to know what kind of AI these companies are using, some are direct in their promises to “accelerate recruitment within underrepresented and hard-to-find patient populations, and exceed your DE&I goals.” Other advertisements for the use of AI as a recruiting tool are less specific.

Addressing the reality that new technology originally designed to sell consumer products can interfere with the informed consent process is not giving in to panic. Since AI is already playing a role in recruiting research subjects, it is too late to wait for federal or state legislation. Instead, the Executive Branch should act immediately, as it has in the face of previous threats such as gain-of-function research with potentially lethal viruses. There is also precedent for the public health and research community to act on its own. This option seems especially compelling since the AI industry itself has called for a general, and probably impossible to accomplish, moratorium on further innovation.

We saw during the first two years of the pandemic that rapid change in research protocols is possible. Banning the use of persuasive AI would not affect any ongoing study, but it would stop any current use of AI to undermine the process of obtaining fully informed consent for research, and prevent such use in the future.

This opinion piece reflects some of the content of a substantial article already accepted for publication by the University of San Diego Law Review.

Jennifer S. Bard

Jennifer S. Bard is a professor of law at the University of Cincinnati College of Law where she also holds an appointment as professor in the Department of Internal Medicine at the University of Cincinnati College of Medicine. Prior to joining the University of Cincinnati, Bard was associate vice provost for academic engagement at Texas Tech University and was the Alvin R. Allison Professor of Law and director of the Health Law and JD/MD program at Texas Tech University School of Law. From 2012 to 2013, she served as associate dean for faculty research and development at Texas Tech Law.
