'Recognize it, monitor it, audit it': Taking action to avoid biased healthcare AI

During a session at HIMSS22, panelists discuss how healthcare organizations can build and maintain unbiased AI.
By Emily Olsen

Moderator Shelley Price with Shannon Harris, Jaclyn Sanchez and Carolyn Sun

Photo: Emily Olsen/MobiHealthNews

ORLANDO, Fla. – As artificial intelligence moves into more corners of healthcare, organizations need to take action to ensure their algorithms aren't introducing bias that deepens already significant health inequities.

It can be a daunting task. One way to start is to ensure people from under-represented backgrounds are at the table from the start, said Carolyn Sun, assistant professor at Hunter College and associate research scientist at the Columbia University School of Nursing.

"There have been programs created to help young women of color or girls in general to become coders and become part of the workforce of health IT," she said at a panel at HIMSS22. "And I think that's a really important way to do it, to start somewhere way back instead of just looking at [it] like, 'Okay, here's the outcome. This isn't quite right. How do we fix it?' Maybe we need to even further step back and dig a little deeper."

But it's also necessary to evaluate the effectiveness of those diversity, equity and inclusion (DEI) programs. There are plenty of DEI initiatives in healthcare organizations, said Shannon Harris, assistant professor in the school of business at Virginia Commonwealth University. But are they just a box being checked? How can workers weigh in if they see a potential problem?

"If you're saying, 'Oh, well, there's no way that they're going to understand what we're doing.' Well, why not? Why can't they? Shouldn't we be able to, in some way, have people understand what's going on in the organization to the point where we can understand where things may need to be adjusted," Harris said. 

It's necessary to consider factors like race, socioeconomic status and gender when using AI, but it's just as important to understand how an algorithm will interpret that information.

"For example, for women who are pregnant who are thinking of having a vaginal birth after cesarean. The algorithm that healthcare providers use already adds extra risk to a Black or Hispanic woman. The thinking is that they're less successful based on historical data to have a successful vaginal birth after cesarean," Sun said. "But in fact, by doing that, we're putting more Black and Latino women into this situation where they're getting a C-section that a white woman may not get."

Harris' research focuses on appointment scheduling. Patients with higher predicted no-show rates were scheduled into later or overbooked slots to maximize clinic efficiency. As a result, in the population she studied, Black patients ended up waiting in the clinic longer than non-Black patients.

"There wasn't any sort of magical solution where we didn't explicitly say, 'Our data are racially biased, which means our optimization needs to be race-aware, so it can eliminate that bias.' And that can be very difficult, right?" she said. 

The solution is not to throw out your data, said Jaclyn Sanchez, senior director of information technology at Planned Parenthood of Southwest and Central Florida. But you have to keep monitoring your output and make changes when necessary.

"Make your AI adaptable. … Make your AI or algorithms answer the questions in the way that you want to answer the questions, not how we currently answer the questions," she said. "So, make your AI adaptable to change, smart. Learn from it, and it's okay to be wrong. Recognize it, monitor it, audit it."

