AI has come to healthcare: What are the pitfalls and opportunities?

The Mayo Clinic's Muhammad Babur discusses the potential, ethics and challenges of using AI in healthcare.
By Laura Lovett

From self-driving cars to virtual travel agents, artificial intelligence has quickly transformed the landscape for nearly every industry. The technology is also employed in healthcare to help with clinical decision support, imaging and triage.

However, using AI in a healthcare setting poses a unique set of ethical and logistical challenges. MobiHealthNews asked health tech vet Muhammad Babur, a program manager at the Mayo Clinic, about the potential challenges and ethics behind using AI in healthcare ahead of his upcoming discussion at HIMSS22.

MobiHealthNews: What are some of the challenges to using AI in healthcare?

Babur: The challenges that we face in healthcare are unique and more consequential. Not only is the nature of healthcare data more complex, but the ethical and legal challenges are more complex and diverse as well. As we all know, artificial intelligence has huge potential to transform how healthcare is delivered. However, AI algorithms depend on large amounts of data from various sources, such as electronic health records, clinical trials, pharmacy records, readmission rates, insurance claims records and health and fitness applications.

The collection of this data poses privacy and security challenges for patients and hospitals. As healthcare providers, we cannot allow unchecked AI algorithms to access and analyze huge amounts of data at the expense of patient privacy. We know the application of artificial intelligence has tremendous potential as a tool for improving safety standards, creating robust clinical decision-support systems and helping to establish a fair clinical governance system.

But at the same time, AI systems without proper safeguards could pose immense challenges to the privacy of patient data and potentially introduce bias and inequality for certain demographics of the patient population.

Healthcare organizations need to have an adequate governance structure around AI applications. They should also utilize only high-quality datasets and establish provider engagement early in AI algorithm development.

Additionally, it is critical for healthcare institutions to develop a proper process for data processing and algorithm development, and to put in place effective privacy safeguards that minimize threats to safety standards and patient data security. …

MobiHealthNews: Do you think that health is held to different standards than other industries using AI (for example, the auto and financial industries)?

Babur: Yes, healthcare organizations are held to different standards than other industries because the wrong use of AI in healthcare could cause potential harm to patients and certain demographics. AI could also help or hinder efforts to tackle health disparities and inequities in various parts of the globe.

Additionally, as AI is utilized more and more in healthcare, there are questions about the boundaries between the physician's and the machine's roles in patient care, and about how to deliver AI-driven solutions to the broader patient population.

Because of all these challenges and the potential for improving the health of millions of people around the world, we need to have more stringent safeguards, standards and governance structures around implementing AI for patient care. 

Any healthcare organization using AI in a patient care setting or clinical research needs to understand and mitigate the ethical and moral issues around AI as well. As more healthcare organizations adopt and apply AI in day-to-day clinical practice, we are witnessing a growing number of them adopting codes of AI ethics and standards.

However, there are many challenges in adopting fair AI in healthcare settings. We know AI algorithms could provide input on critical clinical decisions, such as who will get a lung or kidney transplant and who will not.

Healthcare organizations have been using AI techniques to predict survival rates in kidney and other organ transplantation. A recently published study examining AI algorithms used to prioritize patients for kidney transplants found that the algorithm discriminated against Black patients:

“One-third of Black patients … would have been placed into a more severe category of kidney disease if their kidney function had been estimated using the same formula as for white patients.”
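For context, the race-adjusted formula at issue in debates like this one is commonly the 2009 CKD-EPI creatinine equation, which multiplied estimated GFR by 1.159 for patients identified as Black (the article does not name the exact equation the study examined). A minimal Python sketch, using an illustrative patient rather than data from the study, shows how that single coefficient can move the same lab result across the commonly used stage 3 kidney-disease threshold of 60 mL/min/1.73 m²:

```python
# Minimal sketch of the 2009 CKD-EPI creatinine equation (illustrative only;
# the article does not specify which equation the cited study examined).
def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the contested race coefficient
    return egfr

# Same hypothetical patient and lab value; only the race flag differs.
print(egfr_ckd_epi_2009(1.4, 60, female=False, black=True))   # ~62.9, above the 60 cutoff
print(egfr_ckd_epi_2009(1.4, 60, female=False, black=False))  # ~54.2, stage 3a range
```

With the coefficient applied, this hypothetical patient's kidney function appears to sit above the 60 cutoff; without it, the same creatinine value indicates stage 3a chronic kidney disease. That category shift is exactly the kind of reclassification the quoted study describes.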

These kinds of findings pose a big ethical challenge and moral dilemma for healthcare organizations, one that is unique compared with, say, the financial or entertainment industries. The need to adopt and implement safeguards for fairer and more equitable AI is more urgent than ever. Many organizations are taking a lead in establishing oversight and strict standards for implementing unbiased AI.

MobiHealthNews: What are some of the legal and ethical ramifications of using AI in healthcare?

Babur: The application of AI in healthcare poses many familiar and not-so-familiar legal issues for healthcare organizations, such as statutory, regulatory and intellectual property issues. Depending on how AI is used in healthcare, there may be a need for FDA approval or state and federal registration. There may be reimbursement questions, such as whether federal and state healthcare programs will pay for AI-driven health services. There are contractual issues as well, in addition to antitrust, employment and labor laws that could impact AI.

In a nutshell, AI could impact all aspects of revenue cycle management and have broader legal ramifications. Additionally, AI certainly has ethical consequences for healthcare organizations. AI technology may inherit human biases because of biases in its training data. The challenge, of course, is to improve fairness without sacrificing performance.

There are many types of bias in data collection, such as response or activity bias, selection bias and societal bias. These biases could pose legal and ethical challenges for healthcare.

Hospitals and other healthcare organizations could work together to establish common, responsible processes that mitigate bias. Data scientists and AI experts need more training on reducing potential human biases and on developing algorithms where humans and machines can work together to mitigate bias.
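One concrete shape such a shared process could take is a routine pre-deployment audit of a model's decision rates across demographic groups. The sketch below is a hedged illustration of that idea, not a process described in the interview; the group labels, counts and the 0.8 threshold (the informal "80% rule" used in disparate-impact testing) are assumptions:

```python
# Illustrative fairness audit: compare a model's positive-decision rates
# across demographic groups before deployment. All data here is made up.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, model_approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical waitlist-prioritization decisions for two groups:
audit = ([("group_a", True)] * 40 + [("group_a", False)] * 60
         + [("group_b", True)] * 25 + [("group_b", False)] * 75)
rates = selection_rates(audit)
ratio = disparate_impact_ratio(rates)
print(rates, f"ratio={ratio:.2f}")  # 0.62 here; the "80% rule" would flag < 0.8
```

An audit like this does not fix bias by itself, but it gives hospitals a common, reviewable artifact to act on before a model touches patient care.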

We must have "human-in-the-loop" systems to get human recommendations and suggestions during AI development. Finally, explainable AI is critical to fixing biases. According to Google, “Explainable AI is a set of tools and frameworks to help you understand and interpret predictions made by your machine learning models. With it, you can debug and improve model performance, and help others understand your models' behavior.”
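As a rough illustration of the human-in-the-loop idea (every name and threshold below is an assumption for the sketch, not something specified in the interview), a deployed model might act autonomously only on high-confidence cases and queue everything else for clinician review:

```python
# Illustrative human-in-the-loop gate: confident predictions are automated,
# uncertain ones are routed to a clinician.
from dataclasses import dataclass

@dataclass
class TriageCase:
    patient_id: str
    risk_score: float  # model output in [0, 1]; higher means higher risk

def route(case: TriageCase, low: float = 0.2, high: float = 0.9):
    """Return ("auto", decision) for confident cases, ("human_review", None) otherwise."""
    if case.risk_score >= high:
        return ("auto", "escalate")           # confident high risk: escalate care
    if case.risk_score <= low:
        return ("auto", "routine follow-up")  # confident low risk
    return ("human_review", None)             # uncertain: a clinician decides

for case in [TriageCase("p1", 0.95), TriageCase("p2", 0.55), TriageCase("p3", 0.05)]:
    print(case.patient_id, route(case))
```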

Applying all these techniques and properly educating AI scientists on debiasing algorithms are key to mitigating and reducing bias.

The HIMSS22 session "Ethical AI for Digital Health: Tools, Principles & Framework" will take place on Thursday, March 17, from 1 p.m. to 2 p.m. in Orange County Convention Center W414A.
