[Cover image: profile of a human head with facial recognition dots, a fake face icon, a surveillance camera icon, and identity card icons.]

Case Study 10: Synthetic Identities and Facial Recognition Surveillance

An ethical investigation into the rising risks of identity manipulation and biometric surveillance in public and private systems

Background

As facial recognition AI spreads globally, a new risk has emerged: synthetic identities. These are AI-generated faces and personas used to fool verification systems and evade accountability. At the same time, governments and corporations are deploying facial recognition surveillance with little transparency, oversight, or consent.

From Clearview AI's massive image scraping to wrongful arrests based on misidentification, the threats affect both individuals and society at large. These technologies can target innocent people or allow criminals to operate behind digital masks.

Ethical Concerns

  • False Positives and Wrongful Accusations: Facial recognition systems have led to multiple wrongful arrests, disproportionately of Black men. Accuracy varies significantly across race and gender groups.
  • Synthetic Identity Fraud: Criminals use AI-generated faces to create fake licenses, bypass verification, and commit fraud at scale.
  • Surveillance Without Consent: Companies scrape billions of photos from the internet without user permission. Facial recognition turns identity into a permanent, traceable signature with no opt-out.

Ethical Reflection

When your face becomes your password, your location tracker, and your ID, it also becomes a liability.

Synthetic identities threaten systems from within. Facial recognition threatens people from without. Both require real boundaries and safeguards.

Identity must be protected, not weaponized by automation.

Clause 6.7: Biometric Data Integrity and Protection Against Synthetic Exploitation

Systems using biometric data must secure that information, obtain informed consent, and guard against synthetic impersonation. Facial recognition must not be deployed in sensitive spaces without human review and oversight.
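The human-review requirement in Clause 6.7 can be made concrete as a routing rule: an automated face match is never acted on by itself; weak scores are discarded, and everything else is escalated to a person. The sketch below is a hypothetical illustration of that gate; the function names, score scale, and threshold are assumptions, not part of the clause.

```python
# Hypothetical sketch of the Clause 6.7 review gate: automated facial
# recognition matches are never auto-accepted in sensitive deployments.
# Weak scores are rejected outright; plausible matches go to a human.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    REJECT = "reject"              # score too low: treat as no match
    HUMAN_REVIEW = "human_review"  # plausible match: a person must confirm
    # Deliberately no automatic "accept": per the clause, deployment in
    # sensitive spaces requires human review of every actionable match.


@dataclass
class MatchResult:
    subject_id: str
    score: float  # similarity in [0, 1] reported by the face matcher


def route_match(result: MatchResult, reject_below: float = 0.60) -> Decision:
    """Route a match: discard weak scores, escalate the rest to a human."""
    if not 0.0 <= result.score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if result.score < reject_below:
        return Decision.REJECT
    return Decision.HUMAN_REVIEW
```

The design choice worth noting is the absence of an automatic "accept" branch: even a high-confidence match only produces a recommendation for a reviewer, which is one way to implement the clause's prohibition on unreviewed deployment.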
