How AI Can Advance Health Equity

The following is a guest article by Sachin Patel, CEO at Apixio.

The use of AI in healthcare is quickly gaining traction, with the total market expected to grow at a CAGR of over 46%, reaching $96 billion by 2028. While AI has the potential to improve health equity, intrinsic and extrinsic biases arising from a lack of diverse data can instead exacerbate existing disparities. Furthermore, the closer an activity is to the point of care, the greater the risk that unintended biases will affect clinical decision-making.

For example, an AI-powered system that predicts the likelihood of hospitalization in order to avoid in-patient admissions can help target interventions, prevent adverse outcomes, and reduce healthcare costs. The challenge with such hospitalization models, however, is that training data drawn from high-utilization members may not include data from members affected by racial or socioeconomic disparities.
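One way to surface that kind of gap is to audit subgroup representation in the training data before a model is ever fit. The sketch below is a minimal illustration, not any particular vendor's pipeline; the coverage categories and the 5% threshold are assumptions chosen for the example.

```python
from collections import Counter

def subgroup_shares(records, key, min_share=0.05):
    """Report each subgroup's share of the training data and flag
    groups that fall below a chosen representation threshold."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [g for g, s in shares.items() if s < min_share]
    return shares, flagged

# Illustrative records for a hospitalization-risk training set
training = (
    [{"coverage": "commercial"}] * 90
    + [{"coverage": "medicaid"}] * 8
    + [{"coverage": "uninsured"}] * 2
)
shares, flagged = subgroup_shares(training, "coverage")
print(flagged)  # groups making up under 5% of the training data
```

A flagged group signals that the model's predictions for that population rest on very little evidence, which is exactly the failure mode described above.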

While racial and ethnic biases are important areas of concern, geographic or location-based biases are also problematic, and location may serve as an equally useful proxy for factors whose data is more challenging to capture cleanly. Healthcare AI developers can limit biases by ensuring quality, diverse data for training AI models before implementing them on a large scale.

Here are three examples of biases to look out for in AI solutions and how to curtail or prevent them altogether.

  1. Inadequate data. In some parts of the country, there are significant discrepancies between geographic coverage and population coverage. For example, rural west Texas spans a large geographic area but has only a small population with limited access to care. That means even data sets that cover large areas may not contain enough data to make accurate predictions about diseases, conditions, and patient risks that would otherwise be easy to identify in more densely populated areas.

To account for this, AI developers may need to supplement the data with data from similar populations. For example, utilizing data from areas of rural Oklahoma or Nebraska with comparable demographic characteristics could increase the training population to a size that allows for high confidence in predictions. Similar situations could arise around specific medical conditions, where clustering physically distant regions that share common environments could provide higher volumes of data.

  2. Health coverage program. Is the algorithm in question used for Medicare, Medicaid, or ACA populations? Depending on the covered group, data scientists must consider different socioeconomic factors. While the standard of care sits in the clinical realm, AI support for clinical decisions generally benefits from ensuring that unintended biases embedded within a patient’s coverage program do not play a role. Without considering the coverage program, the model may be biased toward the trends of the largest population and overlook the needs and concerns of the segment at hand.
  3. Variety of data. If the algorithm relies on only one data source, such as insurance claims, it is likely to miss factors that provide key insights. For example, an analytics engine that uses only claims will make less accurate predictions than one that draws on multiple data types. Incorporating unstructured data like clinical notes and forms makes a wealth of additional information available that can significantly improve the quality of a prediction. Another example would be ensuring that newer forms of data – wearable and implantable medical devices, for example – can be easily incorporated where relevant, such as remote ECG monitoring for cardiac patients.
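The third point can be sketched concretely: a model that sees only claims fields misses signals that live in note-derived features. The snippet below is a minimal illustration assuming hypothetical field names (`er_visits`, `smoker`, and so on); it simply merges the two sources per patient, which is the precondition for any richer model.

```python
def merge_patient_features(claims, notes):
    """Combine structured claims features with features extracted from
    unstructured clinical notes, keyed by patient ID. A claims-only
    model would never see the note-derived fields."""
    merged = {}
    for pid in set(claims) | set(notes):
        row = {}
        row.update(claims.get(pid, {}))   # structured claims data
        row.update(notes.get(pid, {}))    # note-derived signals
        merged[pid] = row
    return merged

# Hypothetical inputs for one patient
claims = {"p1": {"er_visits": 3, "rx_count": 5}}
notes = {"p1": {"smoker": True, "lives_alone": True}}
print(merge_patient_features(claims, notes)["p1"])
```

In practice the note-derived fields would come from an NLP pipeline rather than being hand-entered, but the shape of the problem is the same: the merged row carries twice the signal of the claims row alone.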

As much as AI can inform medical decisions, it is important to realize that the ultimate decision rests with the provider. Therefore, to serve as a worthy supplement, AI must earn the trust of providers for use alongside their practice of medicine – removing bias is essential to building that trust.

When used for clinical decision support, AI has the promise to improve care delivery, reduce clinical burnout, and ultimately drive better patient outcomes. However, this promise comes with a great responsibility to minimize systemic errors. When building healthcare AI models, these three biases should be top of mind, in keeping with the essence of “First, do no harm.”

About Sachin Patel

Sachin Patel joined Apixio in 2017 as Chief Financial Officer and later served as President and Chief Financial Officer before taking his current role as Chief Executive Officer. Sachin has extensive experience working with value-based care provider groups including Vantage Oncology, where he served as Vice President, Finance, and Chief Financial Officer of Vantage Cancer Care Network. Sachin has also held positions with Citigroup Investment Banking and began his career in engineering roles with Cisco and IBM. Sachin holds a BS in Electrical Engineering from The University of Texas at Austin and an MBA from the UCLA Anderson School of Management.
