
Real AI. Real Consequences. Real Oversight.
These case studies analyze AI failures across healthcare, hiring, surveillance, social media, and beyond. Each one has shaped the clauses in our doctrine and tests the strength of our ethical guardrails.
Read. Reflect. Challenge them.

Case Study Library
Explore the real-world failures that shaped our doctrine.
-
Domain: Criminal Justice
Summary: COMPAS was an algorithm used to predict recidivism and inform sentencing decisions. It disproportionately labeled Black defendants as high risk while offering no transparency into how its scores were calculated (a sketch of the kind of disparity audit it lacked follows this entry).
Key Issues:
– Racial bias in outputs
– No explainability or transparency
– Use in high-stakes legal decisions without consent
Informed Clauses:
6.1 – Algorithmic Bias in Public Institutions
4.2 – Oversight & Explainability Requirements
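One concrete way to make the oversight requirement in clause 4.2 testable is a disparity audit. The sketch below uses synthetic, hypothetical records (not COMPAS data) and compares false-positive rates, the share of people flagged high risk who did not reoffend, across two groups:

```python
# Minimal disparity-audit sketch. The records are invented for illustration.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True), ("A", False, False), ("A", True,  False),
    ("B", False, False), ("B", True,  True), ("B", False, False), ("B", False, True),
]

def false_positive_rate(rows):
    """Share of people who did not reoffend but were still flagged high risk."""
    negatives = [r for r in rows if not r[2]]
    if not negatives:
        return float("nan")
    return sum(1 for r in negatives if r[1]) / len(negatives)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"Group {group}: false-positive rate = {false_positive_rate(rows):.2f}")
```

If the two rates diverge sharply, the system is imposing more wrongful "high risk" labels on one group, which is exactly the pattern reported for COMPAS.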
-
Case Study #6: Tesla Autopilot & Overtrust in Automation
Domain: Transportation / Consumer Tech
Summary: Tesla’s Autopilot feature was marketed in a way that encouraged overtrust, despite being a driver-assist system. Fatal accidents followed. The ethical failure wasn’t just in the tech, but in the message.
Key Issues:
– Misleading naming and marketing
– Lack of transparency on limitations
– Avoidance of regulatory accountability
Informed Clauses:
6.8 – Ethical Communication of AI Capabilities
7.1 – Shared Responsibility Between Human & System
-
Domain: Employment
Summary: Amazon trained an internal AI to evaluate résumés, and it quickly began filtering out female candidates. The system mirrored historical hiring bias and was ultimately scrapped without transparency (the sketch after this entry shows how a model trained to imitate past decisions reproduces their bias).
Key Issues:
– Gender bias in training data
– Silent retirement of unethical system
– No audit or accountability
Informed Clauses:
2.4 – Data Provenance & Representativeness
7.2 – Silent System Retirement Policy
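A minimal sketch of the failure mode, using entirely synthetic data and hypothetical feature names (`qualification`, `gender_proxy`) rather than anything from Amazon's actual system: a model trained to imitate biased historical hiring decisions learns to penalize a gender-correlated signal, even when qualification is equal.

```python
# Synthetic illustration: imitating biased historical decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
qualification = rng.normal(size=n)            # genuinely job-relevant signal
gender_proxy = rng.integers(0, 2, size=n)     # 1 if the résumé contains a gender-correlated term

# Biased historical decisions: at equal qualification, lower odds of hiring when the proxy is present.
logits = 1.5 * qualification - 1.0 * gender_proxy
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train a screening model to imitate those past decisions.
model = LogisticRegression().fit(np.column_stack([qualification, gender_proxy]), hired)
print("learned weights:", model.coef_)        # the proxy feature gets a negative weight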
-
Domain: Healthcare
Summary: Touted as revolutionary, IBM Watson for Oncology made unsafe treatment recommendations and was deployed without sufficient clinical validation. Its rollout prioritized branding over patient safety.
Key Issues:
– Unsafe AI recommendations in medical settings
– Misrepresentation of capabilities
– Corporate interests overriding safety
Informed Clauses:
3.1 – Clinical Deployment Standards
6.4 – Safety Over Speed in AI Rollout
-
Domain: Finance
Summary: AI-powered credit scoring systems determined access to loans, housing, and jobs, often relying on opaque data and denying applicants without explanation or recourse. These systems silently reinforced economic inequality.
Key Issues:
– Black-box decision making
– No user insight or appeals process
– Algorithmic redlining and exclusion
Informed Clauses:
4.1 – Right to Explanation
6.5 – AI in Financial Gatekeeping
-
Domain: Education
Summary: AI-powered remote proctoring tools monitored students' eye movements, facial features, and behavior, triggering false flags and causing trauma, especially for neurodivergent, disabled, and marginalized students.
Key Issues:
– Consent violations in learning environments
– Racial and ability-based false positives
– Lack of student voice or recourse
Informed Clauses:
6.7 – Dignity & Consent in AI Use for Minors
4.3 – Representation Rights in Surveillance Systems
-
Domain: Healthcare
Summary: A risk-prediction algorithm used in hospitals directed less care to Black patients despite equal or greater need. The algorithm used past healthcare spending as a proxy for medical need, encoding systemic racism into triage (illustrated in the sketch after this entry).
Key Issues:
– Bias through proxy variables
– Discriminatory care prioritization
– Hidden harms within “neutral” data
Informed Clauses:
7.1 – Algorithmic Gatekeeping in Healthcare
2.3 – Hidden Proxies & Biased Metrics
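A minimal sketch with synthetic numbers, not the deployed algorithm: when historical spending stands in for medical need, a group that received less care in the past scores as lower risk today, even with an identical illness burden.

```python
# Synthetic illustration of proxy bias: spending as a stand-in for medical need.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
illness = rng.normal(5.0, 1.0, size=2 * n)          # true medical need, identical across groups
group = np.array(["A"] * n + ["B"] * n)

# Historical spending: group B received roughly 30% less care at the same illness level.
spending = illness * np.where(group == "A", 1.0, 0.7) + rng.normal(0, 0.2, size=2 * n)

# The "risk score" here is simply the spending proxy; the top 25% of scores get extra-care programs.
threshold = np.quantile(spending, 0.75)
selected = spending >= threshold
for g in ("A", "B"):
    print(f"Group {g}: share selected for extra care = {selected[group == g].mean():.2f}")
```

Both groups are equally sick by construction, yet the spending-based score routes the extra-care slots almost entirely to the group that was historically better served.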
-
Domain: Media / Online Platforms
Summary: Engagement-maximizing algorithms on platforms like Facebook and YouTube fueled outrage, misinformation, and tribalism. Content was shaped not by truth, but by what kept users addicted.
Key Issues:
– Psychological manipulation for profit
– Transparency gaps in ranking systems
– Undermining of civic discourse
Informed Clauses:
6.9 – Engineered Influence & Psychological Safety
4.4 – Transparency in Content Amplification
-
Domain: Government / Public Services
Summary: States used automated systems to determine eligibility for public benefits. Thousands were wrongly denied healthcare, food, or housing assistance with no clear appeals path or explanation.
Key Issues:
– Lack of human oversight in life-critical systems
– Disproportionate impact on low-income families
– Zero transparency or due process
Informed Clauses:
6.10 – Fairness in Automated Public Service Systems
4.1 – Right to Explanation
-
Domain: Surveillance / Identity Verification
Summary: Facial recognition systems misidentified people of color at significantly higher rates and failed to detect synthetic identities altogether. These systems were used in law enforcement, airport security, and digital access, with no accountability.
Key Issues:
– Racial bias in facial recognition accuracy
– Failure to distinguish synthetic or manipulated identities
– Overuse of surveillance tech in sensitive areas
Informed Clauses:
6.11 – Facial Recognition & Synthetic Identity Risk
5.2 – Right to Non-Biometric Autonomy