When AI Denies Care: Bias in Medical Resource Allocation
Healthcare systems increasingly use algorithms to make critical decisions about patient care:
A widely used commercial algorithm (deployed by major health systems across the U.S.) was designed to predict which patients would benefit most from "high-risk care management programs." These programs provide additional resources, such as extra attention from trained providers and help coordinating care, for patients with complex chronic conditions.
Scale: This algorithm affected healthcare decisions for approximately 200 million people in the United States.
The algorithm analyzed patient data to predict future healthcare needs and assign risk scores. Patients with high risk scores would be enrolled in programs to help manage their conditions and prevent serious health problems.
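To make the setup concrete, here is a minimal sketch of what a risk-score-and-enrollment pipeline of this kind might look like. Everything in it is an assumption for illustration: the column names, model choice, and data layout are hypothetical, and the real commercial system is proprietary. The 97th-percentile cutoff mirrors the automatic-referral threshold described in the study.

```python
# Illustrative sketch only: hypothetical column names and model choice.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def train_risk_model(claims: pd.DataFrame) -> GradientBoostingRegressor:
    """Fit a model that predicts next-year cost (the label the real system used)."""
    features = claims.drop(columns=["next_year_cost"])
    model = GradientBoostingRegressor()
    model.fit(features, claims["next_year_cost"])
    return model

def flag_for_care_management(model, patients: pd.DataFrame, percentile: float = 97.0):
    """Flag patients whose predicted risk score clears a percentile cutoff."""
    scores = model.predict(patients)
    cutoff = np.percentile(scores, percentile)
    return scores >= cutoff
```

The critical design decision sits in the label, `next_year_cost`: everything that follows in this case turns on that choice.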
Researchers from UC Berkeley, University of Chicago, and Partners HealthCare published a landmark study in the journal *Science* revealing massive racial bias in this healthcare algorithm.
At a given risk score, Black patients were significantly sicker than white patients: they had more chronic conditions and more severe illness than white patients assigned the same score.
Because Black patients received lower risk scores despite being sicker, they were far less likely to be flagged for the extra care the scores controlled.
The researchers calculated that fixing this bias would raise the share of Black patients flagged for extra care from 17.7% to 46.5%.
Researchers estimate this algorithm reduced the number of Black patients identified for extra care by more than half. Given that it affected 200 million people, millions of Black patients were denied care they needed.
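The core of the researchers' audit can be expressed in a few lines: hold the risk score fixed and compare how sick each group actually is at the same score. The sketch below assumes hypothetical column names (`risk_score`, `race`, `n_chronic_conditions`); it is not the study's code, just the shape of the check.

```python
import pandas as pd

def illness_at_equal_risk(df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    """Mean chronic-condition count per risk-score decile, split by race.

    If the score were an unbiased measure of health need, groups in the
    same decile would show a similar illness burden; in the study, Black
    patients were consistently sicker at every score level.
    """
    binned = df.assign(risk_decile=pd.qcut(df["risk_score"], n_bins, labels=False))
    return (binned.groupby(["risk_decile", "race"])["n_chronic_conditions"]
                  .mean()
                  .unstack("race"))
```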
The algorithm predicted healthcare needs by looking at past healthcare spending, not actual health status. The logic was simple: patients who generated higher healthcare costs in the past are probably sicker and will need more care in the future.
This logic breaks down when different groups have unequal access to healthcare.
Black patients in the United States face multiple barriers to healthcare access, including lower rates of insurance coverage, fewer providers in their communities, transportation and cost obstacles, and discrimination and distrust rooted in past mistreatment by the medical system.
Result: Black patients access less healthcare even when they're equally or more sick than white patients.
Therefore: When the algorithm looked at spending data, it saw lower spending for Black patients and incorrectly concluded they were healthier, when in reality they had less access to care.
The algorithm turned a symptom of inequality (unequal access) into a cause of more inequality (unequal care).
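That feedback mechanism can be reproduced with a toy simulation. All numbers below are illustrative assumptions, not figures from the study: two groups are equally sick, but group B receives (and is billed for) only 60% as much care for the same level of illness. A model trained to predict cost then under-ranks group B at the top of the risk distribution, while ranking on illness directly would not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

group = rng.choice(["A", "B"], size=n)              # B faces access barriers
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need: same distribution for both groups

access = np.where(group == "A", 1.0, 0.6)           # barriers reduce care actually received
prior_cost = illness * access * 1_000 + rng.normal(0, 200, n)   # feature the model sees
future_cost = illness * access * 1_000 + rng.normal(0, 200, n)  # label the model is trained on

# A simple least-squares fit stands in for the real cost-prediction model.
slope, intercept = np.polyfit(prior_cost, future_cost, 1)
risk_score = slope * prior_cost + intercept

flagged_by_cost = risk_score >= np.quantile(risk_score, 0.97)   # top 3% by predicted cost
flagged_by_health = illness >= np.quantile(illness, 0.97)       # what a health-based label would flag

print("Group B share of patients flagged by the cost model:",
      round(float(np.mean(group[flagged_by_cost] == "B")), 3))
print("Group B share of patients flagged by actual illness:",
      round(float(np.mean(group[flagged_by_health] == "B")), 3))
```

Running this shows group B badly underrepresented among the cost-flagged patients but roughly half of the illness-flagged ones, which mirrors the pattern documented in the study. The remedy follows the same logic: change what the model is trained to predict, from cost toward a measure of health.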
You might wonder: Why didn't they just measure actual health instead of cost? The reasons mostly come down to convenience: cost appears directly in billing records, it is standardized across providers, and it collapses many different conditions into a single number, whereas "health" is much harder to define and measure.
But "easy to measure" doesn't mean "accurate" or "fair."
Millions of Black patients who needed extra support for chronic conditions like diabetes, heart disease, and kidney problems didn't receive it because the algorithm underestimated their needs.
Without care management programs, patients' conditions worsened, and complications that could have been prevented went unmanaged. Patients and families bore the resulting health, financial, and emotional costs.
This algorithm amplified existing racial disparities in healthcare outcomes.
When healthcare systems use biased algorithms, it further erodes trust in medical institutions within Black communities—trust that is already fragile due to historical abuses.
Because this algorithm was used across major health systems affecting 200 million people, a single flawed design choice shaped care decisions at national scale.
When the bias was revealed, the algorithm's developer worked with the researchers to reduce it, and the findings prompted broader scrutiny of similar risk-prediction tools across the industry.
Using cost as a proxy for health seemed reasonable but encoded racial disparities in healthcare access into the algorithm's predictions.
Cost data was convenient but inappropriate. The most accessible data isn't always the best choice.
The algorithm learned from data reflecting centuries of healthcare discrimination and perpetuated that discrimination into the future.
Black patients already faced healthcare barriers; the algorithm made those barriers worse by denying them programs designed to help.
Understanding healthcare disparities and their causes was essential to recognizing this bias. Technical expertise alone wasn't enough.
When algorithms affect access to healthcare—literally life and death—the burden of proof should be on developers to demonstrate fairness before deployment.
After researchers identified the problem, the bias was significantly reduced. This shows that awareness and action can make a difference.
Explain in your own words why using healthcare costs to predict health needs created racial bias. Why did lower spending not mean better health?
Hint: Think about barriers to healthcare access.
This algorithm created a "vicious cycle" or "feedback loop." Draw a diagram showing how unequal access to care fed into the algorithm, which then created more unequal access.
This isn't about getting a lower score on a product recommendation or seeing different ads—it's about access to medical care. How does the high-stakes nature of healthcare affect how we should think about using AI?
How is this case similar to and different from the COMPAS case (criminal justice) you studied? What patterns do you notice across different types of algorithmic bias?
If you were redesigning this algorithm, what would you measure instead of cost? What challenges might you face in implementing your solution?
Millions of people were affected by this biased algorithm. Who should be held responsible? The algorithm developers? The hospitals that used it? Both? Neither? What should the consequences be?