
Case Study 10: Synthetic Identities and Facial Recognition Surveillance
An ethical investigation into the rising risks of identity manipulation and biometric surveillance in public and private systems
Background
As facial recognition AI spreads globally, a new risk has emerged: synthetic identities. These are AI-generated faces and personas used to fool verification systems and evade accountability. At the same time, governments and corporations are deploying facial recognition surveillance with little transparency, oversight, or consent.
From Clearview AI's massive image scraping to wrongful arrests based on misidentification, the threats affect both individuals and society at large. These technologies can target innocent people or allow criminals to operate behind digital masks.
Ethical Concerns
- False Positives and Wrongful Accusations: Facial recognition systems have led to multiple wrongful arrests, especially of Black men, because accuracy varies sharply across race and gender groups.
- Synthetic Identity Fraud: Criminals use AI-generated faces to create fake licenses, bypass identity verification, and commit fraud at scale.
- Surveillance Without Consent: Companies scrape billions of photos from the internet without user permission. Facial recognition turns identity into a permanent, traceable signature with no opt-out.
By the Numbers
- A 2019 NIST study (NISTIR 8280) found false positive rates 10 to 100 times higher for facial recognition on Black, Asian, and Indigenous faces compared to white faces.
- More than 25 cities in the United States had banned or restricted facial recognition systems as of 2023.
- Clearview AI scraped over 3 billion facial images from the web without consent.
Further Reading
- → ACLU: Face Recognition Is Dangerous
- → ACLU: Wrongful Arrest and Racial Bias
- → Wired: Wrongful Arrests from Facial Recognition
Ethical Reflection
When your face becomes your password, your location tracker, and your ID, it also becomes a liability.
Synthetic identities threaten systems from within. Facial recognition threatens people from without. Both require real boundaries and safeguards.
Identity must be protected, not weaponized by automation.
Clause 6.7: Biometric Data Integrity and Protection Against Synthetic Exploitation
Systems using biometric data must secure that information, obtain informed consent, and guard against synthetic impersonation. Facial recognition must not be deployed in sensitive spaces without human review and oversight.
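The requirements in Clause 6.7 can be read as a gating protocol: no match result is acted on without recorded consent, and no match bypasses human review. A minimal sketch of such a gate, in Python, is shown below. The threshold values, record fields, and routing labels are illustrative assumptions, not taken from any real deployment or standard named in this case study.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- illustrative only.
AUTO_REJECT_BELOW = 0.60   # matches below this similarity are discarded
HUMAN_REVIEW_BELOW = 0.95  # matches below this must go to a reviewer

@dataclass
class Match:
    subject_id: str
    score: float           # similarity score from a face matcher, 0..1
    consent_on_file: bool  # informed consent recorded for this subject

def route_match(match: Match) -> str:
    """Decide how a facial-recognition match may be handled under Clause 6.7.

    No outcome here authorizes automated action: even high-confidence
    matches still require human sign-off before anything happens.
    """
    if not match.consent_on_file:
        return "blocked"        # no informed consent: discard, do not log identity
    if match.score < AUTO_REJECT_BELOW:
        return "rejected"       # too weak to be meaningful
    if match.score < HUMAN_REVIEW_BELOW:
        return "human_review"   # ambiguous: a person must examine it
    return "human_confirm"      # strong match, but still needs human sign-off

print(route_match(Match("subj-001", 0.97, consent_on_file=True)))
```

The key design choice is that the highest-confidence branch still routes to a human rather than triggering any automatic action, reflecting the clause's requirement that facial recognition not operate in sensitive spaces without human review and oversight.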