Case Study 4

Healthcare Algorithm Racial Bias

When AI Denies Care: Bias in Medical Resource Allocation

Background

AI in Healthcare

Healthcare systems increasingly use algorithms to make critical decisions about patient care:

  • Predicting which patients need extra help or monitoring
  • Allocating scarce medical resources
  • Deciding who qualifies for special programs
  • Identifying high-risk patients
  • Determining care pathways

The Specific Algorithm

A widely used commercial algorithm (used by major health systems across the U.S.) was designed to predict which patients would benefit most from "high-risk care management programs." These programs provide:

  • Extra doctor visits and monitoring
  • Care coordination services
  • Disease management support
  • Preventive care and education

Scale: This algorithm affected healthcare decisions for approximately 200 million people in the United States.

How It Was Supposed to Work

The algorithm analyzed patient data to predict future healthcare needs and assign risk scores. Patients with high risk scores would be enrolled in programs to help manage their conditions and prevent serious health problems.

The Problem: Systemic Racial Bias

What Researchers Discovered (2019)

Researchers from UC Berkeley, University of Chicago, and Partners HealthCare published a landmark study in the journal *Science* revealing massive racial bias in this healthcare algorithm.

The Shocking Statistics

At a given risk score, Black patients were significantly sicker than white patients:

  • Black patients had 26% more chronic conditions than white patients with the same risk score
  • Black patients with the same score had more diabetes, anemia, kidney failure, and high blood pressure
  • To receive the same risk score as a white patient, a Black patient had to be considerably sicker

Impact on Program Access

Because Black patients received lower risk scores despite being sicker:

  • They were less likely to be enrolled in care management programs
  • They received less medical attention and resources
  • Their health problems went unmanaged

The researchers calculated that fixing this bias would raise the share of Black patients among those automatically flagged for extra help from 17.7% to 46.5%.

The Scale of Harm

Researchers estimate this algorithm reduced the number of Black patients identified for extra care by more than half. Because it was applied to roughly 200 million people, millions of Black patients were likely denied care they needed.

How Did Bias Enter the System?

The Fatal Flaw: Using Cost as a Proxy for Health

The Algorithm's Approach

The algorithm predicted healthcare needs by looking at past healthcare spending, not actual health status. The logic was:

  1. Sicker patients cost more money
  2. Therefore, high spending = high health needs
  3. Therefore, we can predict health needs by looking at costs

This logic breaks down when different groups have unequal access to healthcare.
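
To make the failure concrete, here is a minimal sketch in Python. The two-group setup and every number in it are illustrative assumptions, not the study's data or the vendor's actual model: both synthetic groups are equally sick by construction, but one generates about 30% less spending at the same illness level.

```python
# Minimal sketch (assumed numbers, not the study's data): when the "risk
# score" is predicted cost, a group that spends less at the same illness
# level is flagged less often, and its flagged members are sicker.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True health need: chronic-condition counts, identical for both groups.
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
conditions = rng.poisson(lam=2.0, size=n)

# Observed spending: proportional to illness, but group B spends ~30%
# less at the same illness level (a stand-in for access barriers).
access = np.where(group == 1, 0.7, 1.0)
cost = conditions * 1000 * access + rng.normal(0, 250, n)

# Even a perfect cost predictor inherits the gap, so use cost itself
# as the risk score and auto-enroll the top 3%.
flagged = cost >= np.quantile(cost, 0.97)
for g in (0, 1):
    print(f"group {g}: share flagged = {flagged[group == g].mean():.2%}, "
          f"mean conditions among flagged = "
          f"{conditions[(group == g) & flagged].mean():.2f}")
```

Running this prints a lower flagged share for group B and a higher illness burden among its flagged members: the same qualitative pattern the researchers found, produced entirely by the choice of label.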

Why This Created Racial Bias

Black patients in the United States face multiple barriers to healthcare access:

  • Insurance Gaps: Black Americans are more likely to be uninsured or underinsured
  • Geographic Barriers: Less access to nearby doctors and hospitals
  • Economic Barriers: Can't afford copays, prescriptions, or time off work
  • Distrust: Historical medical racism (Tuskegee experiment, forced sterilizations) creates warranted distrust
  • Discrimination: Ongoing bias in medical treatment (studies show Black patients' pain is undertreated)

Result: Black patients receive less healthcare even when they are as sick as, or sicker than, white patients.

Therefore: When the algorithm looked at spending data, it saw lower spending for Black patients and incorrectly concluded they were healthier, when in reality they had less access to care.

The Vicious Cycle

  1. Black patients face barriers to healthcare access
  2. They spend less on healthcare (not by choice, but due to barriers)
  3. Algorithm sees lower spending and assigns lower risk scores
  4. Lower scores mean less enrollment in helpful programs
  5. Less access to programs means continuing health disparities
  6. Continuing disparities feed back into the data

The algorithm turned a symptom of inequality (unequal access) into a cause of more inequality (unequal care).
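
This loop can be sketched as a toy simulation. Every dynamic below is an assumption made for illustration (the enrollment effect sizes, the access floor, the 10% threshold); the point is only that spending-based scoring plus unequal access produces an enrollment gap that persists and feeds on itself.

```python
# Toy feedback-loop simulation (all dynamics are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)             # 1 = group facing access barriers
need = rng.poisson(2.0, n).astype(float)  # underlying health need (fixed)
access = np.where(group == 1, 0.7, 1.0)   # step 1: unequal access

for step in range(1, 6):
    spending = need * access                      # step 2: barriers suppress spending
    score = spending                              # step 3: score follows spending
    enrolled = score >= np.quantile(score, 0.90)  # step 4: top 10% get the program
    # steps 5-6: enrollment improves access; being passed over erodes it,
    # and the next round's spending data reflects that erosion.
    access = np.clip(access + np.where(enrolled, 0.05, -0.02), 0.3, 1.2)
    gap = enrolled[group == 0].mean() - enrolled[group == 1].mean()
    print(f"round {step}: enrollment gap (A minus B) = {gap:.3f}")
```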

Why Use Cost Instead of Health?

You might wonder: Why didn't they just measure actual health instead of cost? Reasons include:

  • Cost data is easy to collect (it's in billing systems)
  • Health data is harder to standardize and compare
  • Many measures of health aren't consistently recorded
  • Cost seemed like a "neutral" metric

But "easy to measure" doesn't mean "accurate" or "fair."

Real-World Impact

Who Was Harmed?

Denied Preventive Care

Millions of Black patients who needed extra support for chronic conditions like diabetes, heart disease, and kidney problems didn't receive it because the algorithm underestimated their needs.

Health Deterioration

Without care management programs, patients' conditions worsened:

  • Diabetes complications (vision loss, amputations, kidney failure)
  • Heart attacks and strokes
  • Preventable hospitalizations
  • Shorter life expectancy

Economic Consequences

Patients and families faced:

  • Higher medical bills from preventable emergencies
  • Lost work and income due to illness
  • Caregiver burden on family members

Perpetuating Health Inequities

This algorithm amplified existing racial disparities in healthcare outcomes:

  • Black Americans already have higher rates of diabetes, hypertension, and heart disease
  • Black Americans already have shorter life expectancy
  • The algorithm made these disparities worse by denying care to those who most needed it

Loss of Trust

When healthcare systems use biased algorithms, it further erodes trust in medical institutions within Black communities—trust that is already fragile due to historical abuses.

Scale of Impact

Because this algorithm was used across major health systems affecting 200 million people:

  • Millions of Black patients were denied appropriate care
  • Countless preventable health crises occurred
  • Lives were shortened
  • Healthcare disparities widened

What Could Have Been Done Differently?

Better Metrics:

  • Use Health Outcomes, Not Costs: Measure actual health status (number of chronic conditions, test results, severity of illness) instead of spending (see the sketch after this list)
  • Multiple Indicators: Use combination of clinical measures, not a single proxy variable
  • Account for Access Barriers: Adjust for known disparities in healthcare access
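
As a minimal sketch of the first suggestion, assuming a hypothetical training table (the file name and every column name below are invented for illustration), the change is small in code but large in effect: keep the features, swap the label.

```python
# Hypothetical sketch: same features, different training label.
# "patients.csv" and all column names are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("patients.csv")
features = df[["age", "prior_visits", "num_prescriptions"]]

# Biased setup: the label is next-year spending, which encodes access barriers.
cost_model = GradientBoostingRegressor().fit(features, df["next_year_cost"])

# Fairer setup: the label is a direct health measure (active chronic
# conditions), i.e., how sick the patient is, not how much care they
# managed to obtain.
health_model = GradientBoostingRegressor().fit(features, df["next_year_conditions"])
```

Measuring health directly is harder, for the reasons listed under "Why Use Cost Instead of Health?", but it removes the proxy through which the bias entered.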

Rigorous Testing:

  • Disaggregated Analysis: Test algorithm performance separately for Black and white patients before deployment (see the audit sketch after this list)
  • Equity Audits: Explicitly examine whether algorithm perpetuates or reduces health disparities
  • Clinical Validation: Have doctors review whether algorithm's risk assessments match clinical reality
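
A disaggregated audit of this kind can be only a few lines. In this sketch the DataFrame columns (risk_score, race, n_chronic) are assumptions; the test mirrors the 2019 study's method of comparing illness burden across groups at the same risk score.

```python
# Hypothetical audit sketch: within each risk-score decile, compare mean
# chronic-condition counts across groups. If the score fairly measured
# health need, equal scores would mean equal sickness; large per-decile
# gaps are the red flag the researchers found.
import pandas as pd

def audit_by_score(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["score_decile"] = pd.qcut(df["risk_score"], 10,
                                 labels=False, duplicates="drop")
    return (df.groupby(["score_decile", "race"])["n_chronic"]
              .mean()
              .unstack("race"))   # rows: deciles; columns: groups

# Usage (file name is hypothetical):
# print(audit_by_score(pd.read_csv("scored_patients.csv")))
```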

Diverse Perspectives:

  • Include Health Equity Experts: Involve people who understand healthcare disparities in algorithm design
  • Community Input: Get feedback from affected communities
  • Diverse Development Teams: Include people from diverse racial backgrounds who might spot problems

Accountability:

  • Transparency: Healthcare systems should disclose use of algorithms and how they work
  • Regular Audits: Continuously monitor for disparate impact
  • Correction Mechanisms: Process to fix bias when discovered
  • Patient Rights: Ability to challenge algorithmic decisions

What Happened After Discovery:

When the bias was revealed:

  • The algorithm developer worked with researchers to reduce bias
  • They developed a new version that cut racial bias by 84%
  • However, many health systems were still using the biased version
  • Questions remain about other similar algorithms

Key Lessons Learned

Proxy Variables Can Hide Bias

Using cost as a proxy for health seemed reasonable but encoded racial disparities in healthcare access into the algorithm's predictions.

"Easy to Measure" ≠ "Right to Measure"

Cost data was convenient but inappropriate. The most accessible data isn't always the best choice.

Historical Inequality Becomes Algorithmic Inequality

The algorithm learned from data reflecting centuries of healthcare discrimination and perpetuated that discrimination into the future.

AI Can Amplify Existing Injustice

Black patients already faced healthcare barriers; the algorithm made those barriers worse by denying them programs designed to help.

Context Is Essential

Understanding healthcare disparities and their causes was essential to recognizing this bias. Technical expertise alone wasn't enough.

High Stakes Demand High Standards

When algorithms affect access to healthcare—literally life and death—the burden of proof should be on developers to demonstrate fairness before deployment.

Bias Can Be Fixed

After researchers identified the problem, the bias was significantly reduced. This shows that awareness and action can make a difference.

Discussion Guide

Small Group Discussion Questions

Question 1: Understanding the Root Cause

Explain in your own words why using healthcare costs to predict health needs created racial bias. Why did lower spending not mean better health?

Hint: Think about barriers to healthcare access.

Question 2: The Vicious Cycle

This algorithm created a "vicious cycle" or "feedback loop." Draw a diagram showing how unequal access to care fed into the algorithm, which then created more unequal access.

Question 3: Life and Death Stakes

This isn't about getting a lower score on a product recommendation or seeing different ads—it's about access to medical care. How does the high-stakes nature of healthcare affect how we should think about using AI?

Question 4: Comparing to Other Cases

How is this case similar to and different from the COMPAS case (criminal justice) you studied? What patterns do you notice across different types of algorithmic bias?

Question 5: Solutions

If you were redesigning this algorithm, what would you measure instead of cost? What challenges might you face in implementing your solution?

Question 6: Accountability

Millions of people were affected by this biased algorithm. Who should be held responsible? The algorithm developers? The hospitals that used it? Both? Neither? What should the consequences be?

Whole Class Discussion

  • Why do you think this bias wasn't caught before the algorithm was deployed to affect 200 million people?
  • How does this case connect to broader issues of racial inequality in healthcare and American society?
  • Should patients have the right to know when algorithms are being used to make decisions about their care? Should they be able to opt out?
  • What role does historical medical racism (like the Tuskegee experiment) play in how Black communities experience healthcare and trust medical systems?
  • After researchers found the bias, the developer reduced it by 84%. Is that good enough? Should it be 100% eliminated?

Additional Resources

Primary Sources

  • Obermeyer et al., "Dissecting racial bias in an algorithm used to manage the health of populations" - Science, October 2019

Related Reading

  • "Medical Apartheid" by Harriet A. Washington (historical context on medical racism)
  • "The Immortal Life of Henrietta Lacks" by Rebecca Skloot
  • CDC information on health disparities

Videos and Articles

  • NPR: "A Look Into The Health Disparities That Affect Millions Of Americans"
  • Washington Post: "Racial bias in a medical algorithm favors white patients over sicker black patients"