
Case Study 7: Healthcare Bias & Algorithmic Gatekeeping

An ethical breakdown of racial disparities in healthcare algorithms and the risk of invisible discrimination

Background

In 2019, a Science study exposed racial bias in a widely used healthcare risk prediction algorithm. The tool was meant to identify patients needing extra care, but it consistently assigned Black patients lower risk scores than white patients with the same conditions.

The flaw stemmed from using past healthcare spending as a proxy for health needs: Black patients had lower medical costs because of systemic barriers to care, not better health. The algorithm affected millions of patients across major hospital systems.
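To see the mechanics concretely, here is a minimal sketch in Python with synthetic numbers and a hypothetical scoring rule (not the vendor's actual code): when spending stands in for need, two equally sick patients get very different scores if one faces barriers to care.

```python
# A minimal sketch (synthetic numbers, hypothetical scoring rule) of how a cost
# proxy encodes bias: two patients with identical health needs, but one group's
# historical spending is suppressed by barriers to accessing care.

def risk_score_from_cost(past_spending, max_spending=10_000):
    """Hypothetical risk score: normalized past spending, standing in for 'need'."""
    return past_spending / max_spending

# Two patients with the same chronic-condition burden (same true need).
patient_a = {"true_need": 0.8, "past_spending": 9_000}   # full access to care
patient_b = {"true_need": 0.8, "past_spending": 5_500}   # access barriers -> less spending

score_a = risk_score_from_cost(patient_a["past_spending"])  # 0.90
score_b = risk_score_from_cost(patient_b["past_spending"])  # 0.55

# Despite identical need, only patient A clears a typical referral threshold,
# so the proxy quietly routes extra care away from patient B.
REFERRAL_THRESHOLD = 0.7
print(score_a >= REFERRAL_THRESHOLD, score_b >= REFERRAL_THRESHOLD)  # True False
```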

Ethical Concerns

  • Proxy Discrimination: The model used cost as a proxy, encoding systemic inequities and undermining equitable care.
  • Silent Bias at Scale: Doctors trusted the algorithm without knowing its flaw, spreading its bias quietly through routine care.
  • No Accountability or Appeals: Patients had no visibility into their scores and no way to contest them: no audits, no transparency, no recourse.

By the Numbers

  • More than 200 million patients affected across hospital networks.
  • Black patients were 48 percent less likely than equally sick white patients to be referred for extra care (a minimal sketch of how such a gap can be measured follows this list).
  • Removing cost as the proxy eliminated about 84 percent of the bias, proof that the flaw can be fixed once identified.
  • → Scientific American: Risk Scores and Racial Bias
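
The kind of check behind a figure like the 48 percent gap is not exotic. Below is a minimal audit sketch in Python with synthetic records and hypothetical field names (not the study's actual data or code): restrict to comparably sick patients, then compare referral rates by group.

```python
# Sketch of a simple disparity audit (synthetic records, hypothetical field names):
# among comparably sick patients, compare referral rates across groups.

from collections import defaultdict

# Each record: (group, comorbidity_count, was_referred_to_care_program)
records = [
    ("white", 5, True), ("white", 5, True), ("white", 5, False),
    ("black", 5, True), ("black", 5, False), ("black", 5, False),
]

referred = defaultdict(int)
total = defaultdict(int)
for group, comorbidities, was_referred in records:
    if comorbidities >= 5:            # restrict to equally sick patients
        total[group] += 1
        referred[group] += was_referred

rates = {group: referred[group] / total[group] for group in total}
print(rates)  # roughly {'white': 0.67, 'black': 0.33}

# Relative gap: how much less likely one group is to be referred than the other.
gap = 1 - rates["black"] / rates["white"]
print(f"referral gap: {gap:.0%}")  # 50% in this toy sample
```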

Ethical Reflection

This case shows how ostensibly neutral math can reinforce injustice when models are trained on data produced by unfair systems. What made it worse was how quietly it happened: patients didn’t know, providers didn’t know, the system just decided.

Bias in, bias out, unless we audit, surface, and correct.

Silent gatekeeping at scale is the invisible enemy of equity.

Clause Tie‑In (from Doctrine)

Clause 7.1: Algorithmic Gatekeeping in Healthcare

Any AI affecting care prioritization must be audited regularly for bias, be fully transparent in its logic, and give patients a path to recourse. Algorithmic gatekeeping can never override the right to equitable treatment.

Related Resources

← Back to Case Study Library