The Ethical Implications of AI and Data in Healthcare

By Michael Armstrong, Chief Technology Officer, Authenticx
Twitter: @be_authenticx

From patient communication to cancer detection, artificial intelligence’s (AI’s) role across healthcare is becoming increasingly prominent. Although its implementation increases the effectiveness and efficiency of healthcare, AI also raises patient privacy concerns. In fact, healthcare breaches were at an all-time high in 2021, affecting 45 million people, according to Critical Insight’s 2022 Healthcare Data Breach Report. To safeguard patient information and communications, healthcare organizations must build a foundation of trust, compliance and transparency.

AI’s relationship to data

Global data breaches have renewed the focus on healthcare data’s security and privacy risks, along with interest in the protections currently in place. With AI-powered products helping to facilitate the exchange of medical information between patients and medical team members, healthcare organizations must take every step to protect individual information and privacy.

Federal and state regulations and laws, such as the Health Insurance Portability and Accountability Act (HIPAA), govern the data that healthcare organizations collect and use. However, HIPAA has its shortcomings because the legislation originated well before the wider implementation of AI and machine learning (ML).

AI’s challenges

As the healthcare industry seeks new use cases for AI and ML, it must also identify and address the privacy and security challenges these tools can introduce.

Biases
While AI itself is ethically neutral, cognitive and algorithmic biases can influence its efficacy. These biases can lead to harmful patient outcomes and differential treatment.

Human programmers can unintentionally introduce biases into AI algorithms. And because AI requires massive amounts of data, incomplete training datasets that fail to represent an entire population also allow biases to creep in.

The most effective way to avoid, or at least significantly reduce, AI bias is to evaluate both the data and the algorithms, and to apply best practices when collecting data and building those algorithms. Healthcare organizations should test algorithms in real-life settings, account for “counterfactual fairness” and adopt a continuous human-in-the-loop feedback loop in which humans provide consistent feedback from which the AI (and its algorithms) learns.
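One lightweight way to operationalize a counterfactual fairness check is to flip a single protected or proxy attribute and verify that the model’s prediction does not change, routing any failures to a human reviewer. The Python sketch below is purely illustrative: the triage_model function, field names and zip-code rule are hypothetical stand-ins, not a real product or a production fairness test.

    # Minimal counterfactual-fairness spot check: flip one attribute and
    # see whether the prediction changes. Model and fields are hypothetical.

    def triage_model(patient: dict) -> float:
        """Hypothetical risk model; a real system would call a trained model."""
        score = 0.3 if patient["age"] > 65 else 0.1
        # A biased model might (wrongly) let a proxy attribute leak in:
        if patient["zip_code"].startswith("462"):
            score += 0.05
        return score

    def counterfactual_check(patient: dict, attribute: str, alternative) -> bool:
        """Return True if flipping `attribute` leaves the prediction unchanged."""
        flipped = dict(patient, **{attribute: alternative})
        return triage_model(patient) == triage_model(flipped)

    patient = {"age": 70, "sex": "F", "zip_code": "46204"}
    if not counterfactual_check(patient, "zip_code", "10001"):
        # In a human-in-the-loop workflow, flagged cases go to a reviewer,
        # and the reviewer's judgment feeds back into retraining.
        print("Flag for human review: prediction depends on zip code")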

Patient privacy
Healthcare organizations use data anonymized through a de-identification process, which removes direct patient identifiers so data can be shared without compromising privacy. The end result may look different depending on the data and its intended use.

As AI products evolve, data volume increases, and new data elements are added to the AI system. While developers may use de-identified data to address potential biases, adding new data also increases the probability of accidentally introducing identifiable data. Organizations should continually assess risks and potential impacts of their AI systems and the identifiable data they produce.
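As a concrete illustration, here is a minimal de-identification sketch. The field names and the short identifier list are assumptions for this example; a real pipeline would cover all 18 HIPAA Safe Harbor identifier categories and pair removal with a formal re-identification risk assessment. The unreviewed-field guard mirrors the point above: every new data element should trigger a privacy review before it flows through the system.

    # Minimal de-identification sketch with a guard against new fields.
    # Field names are illustrative; a production pipeline would cover all
    # 18 HIPAA Safe Harbor identifier categories, not this small subset.

    DIRECT_IDENTIFIERS = {"name", "phone", "email", "ssn", "mrn"}
    REVIEWED_FIELDS = DIRECT_IDENTIFIERS | {"age_band", "diagnosis_code", "visit_type"}

    def de_identify(record: dict) -> dict:
        """Drop known direct identifiers; refuse fields nobody has reviewed."""
        unknown = set(record) - REVIEWED_FIELDS
        if unknown:
            # New data elements must be risk-assessed before they flow through,
            # since each one can accidentally reintroduce identifiable data.
            raise ValueError(f"Unreviewed fields need a privacy review: {unknown}")
        return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    record = {"name": "Jane Doe", "age_band": "65-74", "diagnosis_code": "E11.9"}
    print(de_identify(record))  # {'age_band': '65-74', 'diagnosis_code': 'E11.9'}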

Vendor due diligence
Healthcare organizations regularly conduct due diligence on how vendors collect and store data because technology reliant on protected health information (PHI) has become a prime target for cybercriminals. Security breaches exposing PHI carry significant financial, legal and reputational consequences for organizations. In the healthcare sector, the average data breach costs an organization more than $10 million.

Solutions to AI risks

While AI does carry risks, the healthcare industry can’t discount its value. To mitigate those risks, healthcare organizations should:

  • Establish comprehensive risk management programs to prevent data breaches, putting processes and procedures in place to vet third-party vendors before granting them access to data.
  • Implement enhanced compliance monitoring, including regular audits, to quickly identify and address any compromised data.
  • Enforce strict controls over data access (see the sketch following this list).
  • Train personnel and vendors on access limits, data use limits, security obligations and limitations in patient consent and authorization forms.
  • Emphasize patient agency and consent. For example, organizations should require informed consent for new uses of data and clearly communicate patients’ right to withdraw data.
  • Address de-identification’s shortcomings with new, improved forms of data protection and anonymization, focusing both on innovation and on the regulatory component that requires healthcare organizations to use cutting-edge, effective methods for privacy protection.
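As one example of the access-control and monitoring items above, the sketch below shows a deny-by-default permission check that logs every access attempt for later audit. The roles, permissions and logger name are hypothetical; a real deployment would integrate with an identity provider and tamper-resistant audit storage.

    # Deny-by-default, audited data access; roles and permissions are
    # illustrative assumptions, not a specific product's access model.
    import logging

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("phi_access")

    ROLE_PERMISSIONS = {
        "physician": {"read_phi", "write_phi"},
        "billing": {"read_billing"},
        "analyst": {"read_deidentified"},
    }

    def access_phi(user: str, role: str, action: str) -> bool:
        """Deny by default; log every attempt so audits can spot misuse."""
        allowed = action in ROLE_PERMISSIONS.get(role, set())
        audit_log.info("user=%s role=%s action=%s allowed=%s",
                       user, role, action, allowed)
        return allowed

    access_phi("asmith", "analyst", "read_phi")    # denied and logged
    access_phi("bjones", "physician", "read_phi")  # allowed and logged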

As the healthcare industry increases its reliance on AI, stakeholders must take steps to diminish its associated risks. Developers must work to prevent bias in their algorithms, regulators must develop actionable guidelines to protect patient data, and decision-makers at all levels must evaluate the safety and security of each AI-enhanced tool.