
Tackling Racially Biased Health Care Algorithms

By Hannah Rahim

Algorithms used in health care have the potential to improve health outcomes but are susceptible to racial bias, which can have detrimental consequences for minority populations.

Federal, state, and municipal governments have taken steps towards halting the use of racially biased health care algorithms, but more comprehensive regulation and oversight are needed.

How algorithms perpetuate bias

Race is used as an input in many clinical algorithms, despite strong evidence that race is not a reliable reflection of genetic differences. The use of race as a variable can lead to undesirable effects, such as worsening health inequities and directing more resources to white patients than to racial minorities.

Racial bias can also arise through other aspects of an algorithm’s design. For instance, algorithms that rely on health care spending as a proxy of health need can be problematic because some marginalized populations spend less on health care due to longstanding wealth and income disparities. Consequently, these populations are seen as having a lower need for care and thus may be disqualified from receiving extra care.
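The spending-as-proxy problem described above can be made concrete with a small sketch. The numbers below are entirely hypothetical, and `select_for_extra_care` is an illustrative function, not any real system's logic: two groups have identical distributions of true health need, but one group spends about 30% less at every need level. Ranking patients by spending then skews who is selected for extra care.

```python
# Illustrative sketch (hypothetical numbers): how ranking patients by past
# health care spending, rather than by actual health need, can divert
# extra-care resources away from a group that spends less for the same need.

# Each patient is a tuple: (group, true_need_score, past_spending_dollars).
# Group B has the same distribution of true need as group A but, due to
# access and income barriers, spends ~30% less at every need level.
patients = [
    ("A", need, need * 100)          # group A: spending tracks need
    for need in (2, 4, 6, 8, 10)
] + [
    ("B", need, need * 100 * 0.7)    # group B: same need, lower spending
    for need in (2, 4, 6, 8, 10)
]

def select_for_extra_care(pats, k, key_index):
    """Pick the top-k patients ranked by the chosen proxy column."""
    return sorted(pats, key=lambda p: p[key_index], reverse=True)[:k]

# Proxy = spending (index 2): group A dominates the selected cohort.
by_spending = select_for_extra_care(patients, 4, key_index=2)
# Proxy = true need (index 1): both groups are represented equally.
by_need = select_for_extra_care(patients, 4, key_index=1)

print("selected by spending:", [p[0] for p in by_spending])
print("selected by need:    ", sorted(p[0] for p in by_need))
```

Even with this mild 30% spending gap, three of the four slots go to group A under the spending proxy, while ranking on true need splits the slots evenly.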

Addressing algorithmic bias at the federal level

Following a promise by the Biden Administration in 2022 to conduct an “evidence-based examination of health care algorithms and racial and ethnic disparities,” the Agency for Healthcare Research and Quality (AHRQ) began a systematic review last year.

The U.S. Food and Drug Administration (FDA) has also begun to consider the regulation of clinical algorithms. In 2021, the FDA released an action plan for the regulation of artificial intelligence and machine learning in medicine, which included supporting the development of methods to evaluate algorithms for bias. In 2022, the FDA issued a guidance document that contained recommendations for the use of clinical decision-support software but did not establish legally enforceable responsibilities.

Legally enforceable regulation is essential to create accountability for preventing algorithmic bias. Two recently proposed rules by the Department of Health and Human Services (DHHS) are a promising starting point. First, a 2022 proposed amendment to the Affordable Care Act prevents covered entities from discriminating against any individuals on the basis of race or other protected categories through the use of clinical algorithms. Second, in April 2023, DHHS proposed a rule to govern health data and technology that would require developers of clinical decision-support algorithms to adopt practices to manage the risk of bias, make information about those practices publicly available, and enable algorithm users to review whether the algorithms were tested for fairness.

Involvement of state governments

State Attorneys General in California and D.C. have sought to prevent racial bias in algorithms through investigations and proposed legislation. California Attorney General Rob Bonta began an inquiry into racial bias in health care algorithms in September 2022 by requesting information from hospital CEOs about their use of clinical decision-making algorithms. D.C. Attorney General Karl Racine introduced in 2021, and reintroduced in 2023, the Stop Discrimination by Algorithms Act to reduce discrimination in AI decision-making tools.

Involvement of municipal governments

Municipal government interventions can also play an important role in ending the use of racially biased algorithms. For instance, the New York City Department of Health and Mental Hygiene launched the Coalition to End Racism in Clinical Algorithms (CERCA), uniting hospitals, health systems, medical schools, and independent practitioners. CERCA’s goals include raising awareness of racially biased algorithms, strengthening their members’ commitments to health equity, eliminating race correction in at least one clinical algorithm within two years, and measuring the impacts of eliminating race correction.

Next steps

Further research is needed to understand the scope of use and implications of biased health care algorithms, which in turn should inform bias mitigation strategies. In addition to ongoing AHRQ research, state Attorneys General should expand upon the approach of California and use their investigatory powers to collect relevant information from hospitals, health insurers, and algorithm developers. Medical associations, academic institutions, and research organizations should prioritize research on this issue and fund the development of more representative datasets for algorithm training and validation.

With this data, state and federal lawmakers can consider various legal strategies to stop discrimination. At the state level, racially biased algorithms might be framed as a public nuisance, as public nuisance law allows state officials to sue private companies for the negative impact of their products on public health or welfare. Public nuisance theory has been successfully used in other public health lawsuits involving opioids, climate change, tobacco, handguns, water pollution, and predatory lending. At the federal level, the FDA should adopt legally enforceable standards to build upon its current recommendations.

It is also essential for health care institutions to enact policies to encourage algorithmic reform and approaches towards race-neutral alternatives. For example, the Organ Procurement & Transplantation Network approved a requirement that transplant hospitals use race-neutral calculations when estimating kidney function. Also, hospitals should establish oversight mechanisms to identify bias resulting from the algorithms they use.
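The kidney-function example above can be sketched in code. The 2009 CKD-EPI eGFR equation multiplied its result by roughly 1.159 for patients recorded as Black; the 2021 refit removed the race term. The sketch below is a simplification, not the full equation: `base_egfr` stands in for the race-free portion of the calculation, and the eligibility threshold is illustrative.

```python
# Hedged sketch: how a race "correction" coefficient changes a clinical
# estimate. `base_egfr` stands in for the race-free portion of the CKD-EPI
# calculation; the full equation's constants are omitted for brevity.

RACE_COEFFICIENT_2009 = 1.159  # multiplier applied to Black patients (2009 eq.)

def egfr_2009(base_egfr: float, is_black: bool) -> float:
    """Race-corrected estimate: inflates eGFR for Black patients."""
    return base_egfr * (RACE_COEFFICIENT_2009 if is_black else 1.0)

def egfr_2021(base_egfr: float) -> float:
    """Race-neutral estimate: the same calculation for every patient."""
    return base_egfr

# Illustrative eligibility threshold: lower eGFR indicates worse kidney
# function, so patients qualify for the transplant waitlist below it.
THRESHOLD = 20.0
base = 18.0  # same measured inputs for two hypothetical patients

# Under the 2009 equation, the race coefficient pushes the Black patient's
# estimate above the threshold (18.0 * 1.159 = 20.862), delaying listing;
# under the race-neutral equation, both patients qualify.
print(egfr_2009(base, is_black=True))   # 20.862
print(egfr_2021(base))                  # 18.0
```

The point of the sketch is that the race coefficient systematically reports healthier kidney function for Black patients with identical lab values, which can delay referral and transplant eligibility; removing it, as the Organ Procurement & Transplantation Network requirement does, applies one calculation to all patients.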

Individual-level interventions that educate clinicians and patients are also important. Hospitals should provide resources to patients such as easy-to-understand information about race-based algorithms and their uses, questions that patients can ask a physician about the use of their demographic information in algorithms, and resources to file a civil rights complaint for health care discrimination. Hospitals or algorithm developers should develop educational resources that empower clinicians to assess the validity of algorithms and substitute their own judgment where appropriate.

While existing initiatives have made progress towards ceasing the use of racially biased algorithms, further research and legal reform are needed to counter their pernicious effects.

Hannah Rahim

Hannah is a JD/MPH student at Harvard Law School and Harvard T.H. Chan School of Public Health (Class of 2025). Her research explores legal and policy strategies to prevent discrimination against persons who use drugs who are seeking organ transplantation. She has previously published research papers on transplant immunology, birth tourism, and global COVID-19 seroprevalence. Hannah is also the co-President of the Harvard Health Law Society.
