How to Mitigate Ethical Challenges of AI-Driven Healthcare

The following is a guest article by Natalie C. Oehlers, Associate Attorney, and Carly C. Koza, Healthcare Associate Attorney, at Buchanan Ingersoll & Rooney.

While various pieces of legislation have been implemented or proposed at both the federal and state levels to standardize and accelerate the development of artificial intelligence (AI), most stop short of addressing the significant ethical challenges that may arise. Now, more than ever, it is important for developers, manufacturers, and healthcare providers to remain vigilant about the ethics of AI.

How AI Improves the Healthcare Industry

AI is a powerful tool that has the potential to help solve some of the biggest challenges facing the healthcare industry today while ensuring quality patient care, reducing costs, and limiting potential liability for practitioners.

Administrative tasks such as records retention and maintenance, insurance pre-authorization, and bill processing and follow-up consume a significant portion of time that could otherwise be devoted to patient care. AI systems, such as advanced billing software that automates the claims review process, can reduce these administrative burdens.

Another use of AI in the healthcare industry involves data analytics. Data analytics can be broken down into four distinct categories: descriptive, diagnostic, predictive, and prescriptive. All four analyze current and historical data to understand and anticipate outcomes for an individual patient. These techniques also extend to the population at large, predicting future health-related outcomes by assessing broad, community-level factors in addition to current and historical data.

Data analytics is also used in patient care. For example, data analytics and clinical decision support software enhance the decision-making process by providing treatment recommendations based on patient data, reference materials, clinical guidelines, and medical journals. This in turn helps identify the nature of a medical condition and which patients are at risk of falls, hospital readmissions, and other conditions that may require intervention. Additionally, data analytics and image-recognition software are used to identify arrhythmias in EKGs and even detect malignant nodules in CT scans.
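To illustrate the kind of logic at work, the following sketch (in Python, with criteria and thresholds invented purely for illustration rather than drawn from any clinical guideline) shows how a simple rule-based decision-support tool might flag patients for fall-precaution review from structured chart data:

```python
# A toy, hypothetical sketch of rule-based decision support.
# Fields, weights, and cutoffs are illustrative, not clinical guidance.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    prior_falls: int
    sedative_prescriptions: int
    mobility_score: int  # 0 (immobile) to 10 (fully mobile)

def fall_risk_flag(p: Patient) -> bool:
    """Return True if the patient should be reviewed for fall precautions."""
    score = 0
    score += 2 if p.age >= 75 else 0
    score += 2 * min(p.prior_falls, 2)   # fall history weighted most heavily
    score += 1 if p.sedative_prescriptions > 0 else 0
    score += 2 if p.mobility_score <= 4 else 0
    return score >= 4

# Example: an 81-year-old with one prior fall, on a sedative, low mobility.
print(fall_risk_flag(Patient(age=81, prior_falls=1,
                             sedative_prescriptions=1,
                             mobility_score=3)))  # True -> review patient
```

In practice, production systems weigh far more inputs and are validated against clinical evidence; the point here is only that the output is a recommendation for a clinician to review, not a decision.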

Even as AI use in healthcare delivers optimized staffing, improved cost management, greater efficiency in accounts receivable and bill processing, and reduced burnout, ethical concerns must still be taken into consideration to ensure quality patient care.

Ethical Challenges that Come with Using AI in the Healthcare Industry 

The use of AI is revolutionizing the delivery of healthcare, but it comes with significant ethical challenges that cannot be ignored, the most important of which involve bias and informed consent.

Bias is prominent at both interpersonal and institutional levels in society today, and the world of AI is no exception. The healthcare industry is no different: the AI systems used are often built on data that is not representative of the patient population at large, which produces biased outcomes. For example, a 2019 study found that a risk-prediction algorithm demonstrated racial bias because it relied on patients' prior healthcare spending as a proxy for future risk and the need for extra care, a measure that ignores social inequities in access to care.
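The mechanism behind that finding is easy to demonstrate. The following simulation (a hypothetical sketch, not the actual study's model or data) shows how ranking patients by spending, when one group generates less spending at the same level of need, systematically under-flags that group for extra care:

```python
# Hypothetical simulation of proxy-label bias: spending stands in for need.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Underlying health need is identical across two groups, A and B.
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
need = rng.gamma(2.0, 1.0, n)

# Assumed inequity: group B generates less spending for the same need
# (e.g., because of unequal access to care).
access = np.where(group == 1, 0.6, 1.0)
spending = need * access + rng.normal(0, 0.1, n)

# "Model": rank patients by spending and flag the top 10% for extra care.
flagged = spending >= np.quantile(spending, 0.9)

# Among patients with the same true high need, group B is flagged far less.
high_need = need >= np.quantile(need, 0.9)
for g, name in [(0, "A"), (1, "B")]:
    rate = flagged[(group == g) & high_need].mean()
    print(f"Group {name}: share of high-need patients flagged = {rate:.2f}")
```

Nothing in the "model" references group membership; the bias enters entirely through the proxy label, which is why representative data and careful label choice matter.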

Additionally, the pre-existing biases of the programmers developing the algorithms used in AI systems, whether intentional or unintentional, often lead to further algorithmic and data-driven bias in the AI itself. This is only exacerbated by the limits of an AI system's programming and computational power, which often restrict its ability to incorporate new knowledge (e.g., new drugs or medical breakthroughs, changes in regulation or legislation), presenting significant issues of technical and emergent bias.

AI, if created and used without taking bias into consideration, may result in poor or delayed treatment and inaccurate diagnoses, especially for historically underrepresented populations. In short, biased data leads to biased outcomes. It is therefore crucial for developers and users of AI to recognize the potential for bias in the development and use of AI and to respond accordingly. Developers must also gather representative data from the population at large instead of relying on historical data or common benchmarks shown to carry pre-existing bias.
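One practical response is a subgroup audit before deployment. The sketch below (function name, record format, and example numbers are ours, purely for illustration) compares how often a model misses truly high-need patients in each group, so disparities surface before the system touches patient care:

```python
# Illustrative subgroup audit: compare miss rates across demographic groups.
from collections import defaultdict

def subgroup_miss_rates(records):
    """records: iterable of (group, predicted_high_risk, truly_high_risk)."""
    counts = defaultdict(lambda: {"missed": 0, "high_need": 0})
    for group, predicted, actual in records:
        if actual:                        # patient truly needs intervention
            counts[group]["high_need"] += 1
            if not predicted:             # the model failed to flag them
                counts[group]["missed"] += 1
    return {g: c["missed"] / c["high_need"] for g, c in counts.items()
            if c["high_need"]}

# Example: group B's high-need patients are missed twice as often.
data = ([("A", True, True)] * 80 + [("A", False, True)] * 20
        + [("B", True, True)] * 60 + [("B", False, True)] * 40)
print(subgroup_miss_rates(data))   # {'A': 0.2, 'B': 0.4}
```

A gap like the one above is a signal to revisit the training data and labels before the system is put in front of clinicians.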

Another major ethical concern surrounding AI use is the principle of informed consent. Informed consent is the process by which practitioners share the risks and benefits of a proposed treatment so patients can make well-informed decisions about their care. Because the technology is constantly evolving, a practitioner must disclose that fact, explain how the AI system works, and share the degree to which the technology was relied upon in determining a course of treatment. The physician should also address potential confidentiality and privacy issues, as patients have a right to know how their protected health information is disseminated for research, diagnostic, and treatment purposes. Moreover, it is vitally important that practitioners continuously seek new training and educational opportunities relating to AI to ensure compliance with the requirements of their profession.

Overall Recommendations 

If used appropriately, AI can help ensure quality patient care while also reducing potential liability for providers. Nevertheless, significant ethical challenges may arise with its use, and it is imperative for developers, manufacturers, and healthcare providers to recognize and mitigate them in every way possible. This includes improving data sets and training models, reducing bias, increasing transparency about the risks and benefits of a proposed treatment and AI's role in it, and strengthening the privacy and security of the data collected and used. Moreover, it is critical that practitioners use AI systems as tools that inform treatment decisions, not as the sole determinants of care.

About Natalie C. Oehlers

Natalie is an Associate Attorney at Buchanan Ingersoll & Rooney in the FDA, Biotechnology and Life Sciences practice group located in Washington, D.C. 

 

About Carly C. Koza

Carly is a graduate of Case Western Reserve University School of Law. She is a Healthcare Associate Attorney at Buchanan Ingersoll & Rooney in Pittsburgh, Pennsylvania. 

   
