Concerted Effort to Define Responsible Use of AI in Healthcare is Sorely Needed

The following is a guest article by Amy Hester, PhD, RN, BC, FAAN, Chairwoman and CEO at HD Nursing

The growing use of artificial intelligence (AI) across industries, particularly in healthcare, has shown both great promise and cause for alarm.

A recent class-action lawsuit filed against one of the nation’s largest health plans and its subsidiary offers a stark example of AI misuse: continuing to use an error-prone model to make decisions that affect individual patient care.

Both companies stand accused of illegally using an algorithm they knew had a high error rate to deny rehabilitation care to seriously ill patients. A STAT investigation found that clinical employees were pressured to follow an algorithm that predicts a patient’s length of stay as a basis for denying payments to people in Medicare Advantage plans. Internal documents showed that a quota was in place to keep patient rehab stays within 1% of the days projected by the algorithm.

In a nascent AI marketplace that resembles America’s wild-west frontier era, it’s no wonder that AI now registers brightly on the radar of government officials. The Biden Administration’s recent executive order on AI, which instructs the Department of Health and Human Services (DHHS) to establish a safety program to track unsafe healthcare practices involving this increasingly popular technology, has been heralded as a “landmark” action and part of a “comprehensive strategy for responsible innovation.” However, the order doesn’t go far enough to protect patient safety.

Government’s AI Executive Order: A Step Forward or Falling Short?

President Biden sought the responsible use of AI in healthcare and the development of affordable, life-saving drugs. While that sounds reasonable, the order never defines “responsible use.” In addition, the reporting program it establishes through DHHS is entirely reactive, responding only after patients have already been harmed. The focus should instead be on defining not only responsible use but also responsible development of healthcare-based AI applications, which means determining how AI can be used proactively to prevent harmful and unsafe healthcare practices.

Responsible development and deployment of AI must include setting acceptable standards and error rates, so that informed decisions can be made about applying models that ultimately inform care and treatment decisions.

Defining Responsible Use: Beyond Reactive Measures 

Without a more thoughtful approach to shaping the use of AI in healthcare, the fallout from careless use of this technology will only impede the progress of valid and useful applications. Given the current climate and the lack of regulation governing the development, evaluation, and deployment of AI, it is not a matter of if, but when, providers will face repercussions.

Until standards are developed for the responsible use of AI in healthcare, providers should consider running AI in parallel with accepted standards of care and evidence-based practices. A key factor placing providers at risk is the lack of informed consent from patients for using these models to drive their care. This is particularly important at this stage of AI’s enculturation into healthcare: Pew Research has found that upwards of 60% of Americans would be uncomfortable with their provider relying on AI in their own healthcare. Knowing the risk or error rate of a particular AI model, informing patients, and obtaining their consent could minimize providers’ legal exposure.

This executive order can serve as a catalyst for broader discussions within healthcare that address all of these concerns and achieve both business and moral imperatives. With regard to the latter, it’s also critical to weed out any bias in the dataset used to train a particular AI model so that its output isn’t skewed. Healthcare currently has an opportunity to chart its own course in how AI evolves in our space. If we don’t take proactive measures to ensure safe practices around AI, it is foreseeable that someone else will regulate it for us.

Gaining Efficiency: AI Versus Automation 

Part of the allure of AI is gaining efficiency and accuracy in the care and treatment of patients. AI isn’t needed to automate workflows, but a thoughtful approach to leveraging information within health records is needed to eliminate duplicate data entry and reuse existing documentation. Existing documentation can feed other workflows; for example, biophysical assessment documentation can help populate fall and pressure injury risk models, as the sketch below illustrates. Automation can increase efficiency and, in some cases, reliability.
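To make that concrete, here is a minimal sketch in Python of how already-charted assessment fields might pre-populate a fall-risk score instead of being re-entered. The field names, weights, and thresholds are invented for illustration only; they are not the inputs or logic of any validated risk model.

```python
# Hypothetical illustration: reuse biophysical assessment fields a
# nurse has already charted to pre-populate a simplified fall-risk
# score. Field names, weights, and thresholds are invented for this
# sketch, not taken from any published or validated model.

from dataclasses import dataclass

@dataclass
class BiophysicalAssessment:
    """Fields already documented elsewhere in the health record."""
    unsteady_gait: bool
    confusion_noted: bool
    mobility_aid_in_use: bool
    fall_in_last_90_days: bool

def prepopulate_fall_risk(assessment: BiophysicalAssessment) -> dict:
    """Map existing documentation onto an illustrative risk score."""
    score = 0
    score += 20 if assessment.unsteady_gait else 0
    score += 15 if assessment.confusion_noted else 0
    score += 10 if assessment.mobility_aid_in_use else 0
    score += 25 if assessment.fall_in_last_90_days else 0
    band = "high" if score >= 45 else "moderate" if score >= 20 else "low"
    return {
        "auto_populated_score": score,
        "risk_band": band,
        "needs_nurse_review": True,  # a clinician still confirms the result
    }

if __name__ == "__main__":
    charted = BiophysicalAssessment(
        unsteady_gait=True,
        confusion_noted=False,
        mobility_aid_in_use=True,
        fall_in_last_90_days=True,
    )
    print(prepopulate_fall_risk(charted))
```

Note that the sketch deliberately leaves a clinician in the loop: the computed value is a pre-filled starting point for review, not a final determination.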

A related question is whether a workflow even adds value. In an academic medical center, for instance, physicians have already completed a patient’s history and physical by the time the patient is admitted to the floor. Often this work is repeated by other providers, such as anesthesia, before surgical admissions. All the information is there, so why does a nurse have to re-enter the patient’s history on admission? Automation could pull that information forward, and the nurse could fill in any gaps that remain, as in the sketch below.
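As a rough sketch of what “pulling information forward” could look like, the example below carries a previously documented history and physical into the nursing admission record and flags only the missing fields for completion. The record structure and field names are invented for this illustration; real integration would go through the EHR vendor’s interfaces, such as an HL7 FHIR API.

```python
# Hypothetical illustration of pulling information forward: merge a
# previously documented history & physical (H&P) into the nursing
# admission record and surface only the gaps the nurse must complete.
# The record structure is invented for this sketch.

REQUIRED_ADMISSION_FIELDS = [
    "allergies", "home_medications", "surgical_history", "code_status",
]

def prefill_admission(prior_hp: dict) -> tuple[dict, list[str]]:
    """Carry forward documented values; list fields still needing entry."""
    admission = {f: prior_hp.get(f) for f in REQUIRED_ADMISSION_FIELDS}
    gaps = [f for f, value in admission.items() if value is None]
    return admission, gaps

if __name__ == "__main__":
    physician_hp = {
        "allergies": "penicillin",
        "home_medications": ["lisinopril 10 mg daily"],
        "surgical_history": "appendectomy (2012)",
        # code_status was never documented, so it remains a gap
    }
    prefilled, gaps = prefill_admission(physician_hp)
    print("Pre-filled:", prefilled)
    print("Nurse to complete:", gaps)
```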

Integrating AI with Human Touch

Even in the absence of formal protections against the misuse of this technology, it’s important to pair AI algorithms with compassionate human interaction, a balance that improves patient outcomes. Starting with informed consent, patients should be made aware that AI is informing their providers’ care and treatment decisions. A human touch is necessary to facilitate the appropriate, informed use of AI in the context of each individual patient. While patients may have similarities, no two patients are exactly alike. Providers must pursue a compassionate approach that uncovers individual needs and differences, ensuring the right decisions and care for each patient.

About Amy Hester

Amy has 25 years of nursing experience, including over a decade of med/surg and neuro nursing followed by unit management and hospital administration. In 2015, she earned a Doctor of Philosophy in Nursing Science and has since published and spoken extensively on falls and injury prediction and prevention. She retired from UAMS in 2018 after 26 years of service to dedicate her time fully to HD Nursing, and she remains adjunct faculty at the UAMS College of Nursing. As an entrepreneur, she mentors others in their own endeavors. Amy also serves as Chair of the HD Nursing Board of Directors.
