How algorithmic bias can exacerbate health inequity

Individual Author(s) / Organizational Author
Bunker, Paige
Publisher
Partners for Advancing Health Equity
Date
September 2023
Publication
Voices from the Collaborative Blog
Abstract / Description

In our modern day and age, health care systems and their providers must make decisions for millions of patients at a time, often at a moment's notice. This is not due to the supernatural decision-making capabilities of a few policymakers or doctors. Rather, it is a result of the power of artificial intelligence (AI) and its algorithmic prediction capabilities.

We assume that AI technology, its findings, and the algorithms at its core are more objective, factual, and neutral than human judgment. However, AI systems are heavily shaped by the imperfections of the human-made systems that produce them. The very algorithms designed to guide patient care can reproduce and even amplify existing structural inequities in the health care system, especially when they are treated as neutral. This is known as algorithmic bias. In the context of health care, algorithmic bias is defined as any instance in which “the application of an algorithm compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability or sexual orientation to amplify them and adversely impact inequities in health systems.”

A 2019 study of the U.S. health care system by Ziad Obermeyer and his colleagues found that a widely used algorithm for identifying patients with high-risk health needs exhibited significant racial bias. The researchers estimated that this bias reduced the number of Black patients selected for extra care by more than 50%.

How does racial bias factor into an algorithm? What steps can our health care system take to eliminate algorithmic bias and prioritize equitable care? Let’s find out.

Causes of Algorithmic Bias

Algorithmic bias has numerous causes, but two of the most common are as follows:

  1. Label choice bias: This occurs when proxy measures (variables that stand in for outcomes that cannot be measured directly) are used to train algorithms, creating a gap between the outcome the algorithm predicts and the outcome it is meant to capture. The racially biased algorithm Obermeyer’s team studied is an example of this kind of bias: it used health care costs as a proxy for health care needs. Because systemic racism creates numerous barriers to health care access for people of color, Black patients often generate lower health care costs than white patients with the same level of need, and the algorithm misread those lower costs as lower need (see the first sketch after this list).
  2. Incomplete or unrepresentative training data: This other main cause of algorithmic bias occurs when the data set used to train an algorithm does not represent the true population. Models are typically less accurate when assessing groups who are underrepresented in their training data. A 2020 study found that when AI systems for thoracic disease diagnosis were trained on gender-imbalanced data, performance declined for the underrepresented group: systems trained mostly on X-rays of male patients had difficulty diagnosing female patients, and vice versa (see the second sketch after this list).
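
To make label choice bias concrete, here is a minimal, hypothetical simulation. It is not drawn from the Obermeyer study’s data; the group names, the numbers, and the 10% care threshold are all invented for illustration. Two groups have identical distributions of true health need, but one faces access barriers that halve the cost it generates per unit of need, so a cost-based risk score flags it for extra care far less often:

```python
# Hypothetical sketch of label choice bias; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.choice(["A", "B"], size=n)               # B faces access barriers
true_need = rng.gamma(shape=2.0, scale=1.0, size=n)  # identical across groups

# Access barriers mean group B generates half the cost per unit of need.
access = np.where(group == "A", 1.0, 0.5)
observed_cost = true_need * access + rng.normal(0, 0.1, size=n)

# A well-fit cost model learns to reproduce the cost label, so we use
# observed cost directly as the "risk score" the algorithm would output.
risk_score = observed_cost

# Flag the top 10% of risk scores for extra care.
flagged = risk_score >= np.quantile(risk_score, 0.90)

for g in ["A", "B"]:
    mask = group == g
    print(f"group {g}: mean true need = {true_need[mask].mean():.2f}, "
          f"share flagged = {flagged[mask].mean():.1%}")
# Both groups have the same mean need, yet group B is flagged far less
# often, because the proxy label (cost) encodes unequal access.
```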
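
The training-data problem can be caught with stratified evaluation. The second sketch below is likewise hypothetical: it fabricates test-set labels and predictions in which positive cases are missed more often for female patients, roughly the failure mode a model trained mostly on male X-rays might show, and then reports sensitivity per subgroup rather than in aggregate:

```python
# Hypothetical sketch of stratified (per-subgroup) model evaluation.
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
n = 1_000
sex = rng.choice(["male", "female"], size=n)
y_true = rng.integers(0, 2, size=n)  # fabricated diagnosis labels

# Pretend the model misses positive cases more often for female patients.
miss_rate = np.where(sex == "male", 0.10, 0.35)
y_pred = np.where((y_true == 1) & (rng.random(n) < miss_rate), 0, y_true)

# Report sensitivity (recall on the positive class) per subgroup:
# a single aggregate number can hide a large gap between groups.
for g in ["male", "female"]:
    mask = sex == g
    print(f"{g}: sensitivity = {recall_score(y_true[mask], y_pred[mask]):.2f} "
          f"(n = {mask.sum()})")
```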

How to Prevent Algorithmic Bias

Experts in this area have identified numerous approaches that health care leaders and developers can use to guard against algorithmic bias:

  1. Understand and identify where bias might exist in data sets. Obermeyer and his colleagues found that the algorithm used to determine medical needs relied on an inaccurate measure, disregarding racial barriers to health care. Health care organizations should have staff thoroughly analyze data sets before using them in AI models so that biases can be identified and addressed during modeling (a sketch of such an audit follows this list). It is also critical to ensure that data sets are diverse and representative of the true population.
  2. Hire diverse teams to build AI algorithms. Just as data sets must be diverse to help limit bias, so must the teams of developers building these models. Developers make critical decisions when selecting the features used and outputs displayed in AI systems, and a team that is diverse in expertise, gender, age, race, and more can help keep bias out of algorithmic development.
  3. Create structures to identify and prevent future bias. Health care organizations must make a continuous commitment to keeping bias out of their AI systems. One measure is implementing reporting channels through which staff can anonymously share bias concerns. Another is creating requirements for how AI models are documented: ideally, organizations should mandate that all relevant model information is clearly recorded to ensure transparency (a documentation sketch follows this list).
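
As a sketch of the first step, a data-set audit can be as simple as comparing the demographic makeup of the training data against the population the model is meant to serve. The code below is illustrative only: the `race` column and the benchmark shares are placeholders, not real census or claims figures:

```python
# Hypothetical sketch of a pre-training representation audit.
import pandas as pd

# Placeholder training data; in practice this would be the real data set.
df = pd.DataFrame({"race": ["white"] * 700 + ["Black"] * 80
                           + ["Hispanic"] * 150 + ["Asian"] * 70})

# Placeholder shares for the population the model is meant to serve.
benchmark = {"white": 0.60, "Black": 0.13, "Hispanic": 0.19, "Asian": 0.06}

audit = pd.DataFrame({
    "dataset": df["race"].value_counts(normalize=True),
    "benchmark": pd.Series(benchmark),
})
audit["gap"] = audit["dataset"] - audit["benchmark"]
print(audit.round(3))
# Large negative gaps flag groups the model will see too rarely in training.
```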
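
And as a sketch of the documentation requirement, model information can be captured in a structured, machine-readable record in the spirit of a “model card.” Every field and value below is illustrative, not a standard mandated by any particular organization:

```python
# Hypothetical sketch of structured AI model documentation.
import json

model_card = {
    "model_name": "readmission-risk-v2",  # invented example model
    "intended_use": "prioritize outreach for care-management programs",
    "target_outcome": "30-day unplanned readmission",
    "proxy_labels": "none; outcome is measured directly",
    "training_data": {
        "source": "2019-2022 claims and EHR data, single health system",
        "known_gaps": "underrepresents uninsured patients",
    },
    "subgroup_performance": {  # filled in from stratified evaluation
        "overall_sensitivity": 0.81,
        "by_race": {"white": 0.83, "Black": 0.74},
    },
    "review": {"last_bias_audit": "2023-06", "owner": "clinical AI team"},
}

print(json.dumps(model_card, indent=2))
```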

Optimizing AI algorithms to remove bias is by no means a cure-all for health inequity. It is critical that, as a society, we continue addressing broader structural barriers within health care. However, addressing algorithmic bias is an important step.

In our modern world of big data, AI predictive models aren’t going anywhere. When implemented correctly, they can meaningfully assist in population health management and the equitable, efficient distribution of health care. But health care leaders must take steps to understand and mitigate system-level biases.

P4HE Authored
Yes