Case Study 3

Facial Recognition Technology Bias

When AI Can't See Everyone: The Dangers of Biased Recognition Systems

Background

What is Facial Recognition?

Facial recognition technology uses artificial intelligence to identify or verify individuals by analyzing their facial features. These systems (a simplified code sketch follows this list):

  • Scan a person's face using cameras or photos
  • Create a unique "faceprint" based on facial geometry, features, and patterns
  • Compare this faceprint against a database of known faces
  • Determine identity or verify that someone is who they claim to be
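
To make the pipeline concrete, here is a minimal Python sketch. The embed() step, the 0.8 similarity threshold, and the database layout are illustrative assumptions rather than any vendor's actual API; real systems compute faceprints with deep neural networks.

```python
# Minimal sketch of the identify/verify pipeline described above.
# embed() is a hypothetical function that turns a face photo into a
# fixed-length numeric "faceprint"; real systems use deep neural networks.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two faceprints; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Compare a probe faceprint against every enrolled faceprint and
    return the best-matching identity, or None if no score clears the
    threshold. The threshold setting trades false matches for misses."""
    best_name, best_score = None, threshold
    for name, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Note that every step can inherit bias: if the model behind the faceprints was trained mostly on lighter-skinned faces, its faceprints for darker-skinned faces will be less distinctive, and the errors surface at the comparison step.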

Current Uses

Facial recognition is used in many sectors:

  • Law Enforcement: Identifying suspects, finding missing persons
  • Security: Airport screening, building access control
  • Banking: Identity verification for account access
  • Retail: Store security and customer tracking
  • Social Media: Photo tagging and organization
  • Education: Attendance tracking, campus security
  • Healthcare: Patient identification

The Promise

Advocates argue facial recognition can:

  • Enhance public safety by catching criminals
  • Improve security and reduce fraud
  • Make identification faster and more convenient
  • Help find missing persons

The Problem: Racial and Gender Bias

Major Research Findings

MIT Researcher Joy Buolamwini's "Gender Shades" Study (2018)

Researcher Joy Buolamwini (whose TED Talk you watched) and Dr. Timnit Gebru tested commercial facial analysis systems from Microsoft, IBM, and Face++, measuring how often each system misclassified the gender of light-skinned and dark-skinned individuals in photos:

  • Light-skinned males: Less than 1% error rate
  • Dark-skinned females: Up to 35% error rate

In the worst case, the technology was roughly 35 times more likely to misclassify a dark-skinned woman than a light-skinned man.

NIST Study (2019)

The National Institute of Standards and Technology tested 189 facial recognition algorithms from 99 developers:

  • False positives (wrong matches) were 10-100 times higher for Asian, Black, and Native American faces compared to white faces
  • Women and elderly people had higher error rates than men and younger adults
  • The problem was consistent across nearly all algorithms tested

Real-World Failures

These biases have led to serious consequences:

  • Robert Williams (2020): Wrongfully arrested in Detroit based on a false facial recognition match. Mr. Williams, a Black man, spent roughly 30 hours in custody before being released.
  • Michael Oliver (2019): Another Black man wrongfully arrested in Detroit due to a facial recognition error.
  • Amara Majeed (2019): Brown University student falsely identified by facial recognition as a suspect in the Sri Lanka Easter bombings.

Why This Is Dangerous

Higher error rates for people of color mean:

  • Innocent people are wrongly identified as suspects
  • False arrests and criminal accusations
  • Loss of freedom, employment, reputation
  • Psychological trauma and fear
  • Disproportionate surveillance of minority communities

How Did Bias Enter the System?

Root Causes:

Training Data Imbalance

Most facial recognition systems were trained primarily on photos of white faces:

  • Many training datasets were 70-80% light-skinned individuals
  • Some datasets had very few or no dark-skinned faces
  • Women of color were especially underrepresented

Result: The AI learned to recognize lighter-skinned faces far more reliably than darker-skinned ones.
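
One way this shows up is in a simple dataset audit. The sketch below uses made-up group labels and counts chosen to mirror the 70-80% skew described above; a real audit would read these labels from the actual training data.

```python
# Sketch of a dataset composition audit; the labels and counts below
# are fabricated to illustrate the imbalance described in the text.
from collections import Counter

training_labels = (["lighter_male"] * 700 + ["lighter_female"] * 150 +
                   ["darker_male"] * 100 + ["darker_female"] * 50)

counts = Counter(training_labels)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group:15s} {n:4d}  ({n / total:.0%} of training data)")
# lighter_male     700  (70% of training data)
# ...
# darker_female     50  (5% of training data)
```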

Technical Challenges with Darker Skin

Photography and imaging technology has historically been optimized for lighter skin tones:

  • Film and digital sensors were calibrated for Caucasian skin tones
  • Lighting conditions affect darker skin differently
  • In certain lighting, there is less contrast between facial features on darker skin

These technical issues compound when AI is trained on biased data.

Homogeneous Development Teams

When development teams lack diversity:

  • Testing may not include diverse faces
  • Problems may not be recognized early
  • Implications for affected communities may not be considered

Optimization for Accuracy on Majority Groups

Systems optimized for "overall accuracy" can perform well on majority groups while failing badly on minorities. For example, a system that is 99% accurate for a majority making up 80% of users and only 65% accurate for the remaining 20% still reports an impressive-sounding 92% overall accuracy, as the sketch below shows.
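
The arithmetic is easy to verify. The shares and accuracies below are made-up numbers for illustration, not measurements of any real system.

```python
# Toy calculation (made-up numbers): one overall accuracy figure can
# hide catastrophic failure on a minority group.
groups = {
    # group: (share of population, accuracy within that group)
    "majority": (0.80, 0.99),
    "minority": (0.20, 0.65),
}

overall = sum(share * acc for share, acc in groups.values())
print(f"Overall accuracy: {overall:.1%}")  # 92.2% -- sounds fine
for name, (share, acc) in groups.items():
    print(f"  {name}: {acc:.0%} accurate ({share:.0%} of population)")
```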

Real-World Impact

Who Was Harmed?

Wrongful Arrests and Criminal Justice

  • Innocent people arrested and jailed based on false matches
  • Permanent criminal records from false accusations
  • Trauma, fear, and loss of trust in law enforcement
  • Disproportionate impact on Black and brown communities

Surveillance and Privacy

  • Communities of color subjected to more surveillance
  • Chilling effect on freedom of movement and expression
  • Protesters and activists particularly vulnerable
  • Students of color tracked more intensively in schools

Access and Exclusion

  • Difficulty accessing services requiring facial recognition
  • Being locked out of buildings, devices, or accounts
  • Extra scrutiny and delays at airports and borders
  • Frustration from technology that "doesn't see" you

Psychological Impact

Joy Buolamwini describes wearing a white mask to be "seen" by AI systems. This highlights the psychological harm of technology that doesn't recognize your humanity.

Broader Societal Consequences

  • Erosion of civil liberties and privacy rights
  • Deepening of racial inequalities in justice and security
  • Normalization of mass surveillance
  • Loss of anonymity in public spaces
  • Power imbalance between government/corporations and citizens

What Could Have Been Done Differently?

Technical Solutions:

  • Diverse Training Data: Ensure datasets include equal representation of all skin tones, genders, ages, and ethnic backgrounds
  • Disaggregated Testing: Test accuracy separately for different demographic groups, not just overall accuracy (a code sketch follows this list)
  • Minimum Performance Thresholds: Require equal accuracy across all groups before deployment
  • Continuous Monitoring: Track real-world performance and error rates by demographic
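
As a sketch of how the Disaggregated Testing and Minimum Performance Threshold ideas could work together, the code below computes an error rate per group and applies a deployment gate. The evaluation records, group names, and 5% threshold are all assumptions invented for this example.

```python
# Sketch: compute error rates per demographic group, then apply a
# deployment gate. All records and the 5% threshold are fabricated.
from collections import defaultdict

# (group, match_was_correct) pairs from a hypothetical evaluation run
results = [
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
    ("lighter_male", True), ("darker_female", True), ("darker_female", False),
    ("darker_female", True), ("darker_female", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

THRESHOLD = 0.05  # assumed maximum acceptable error rate per group
for group, n in totals.items():
    rate = errors[group] / n
    verdict = "OK" if rate <= THRESHOLD else "FAILS the deployment gate"
    print(f"{group}: {rate:.0%} error rate (n={n}) -> {verdict}")
```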

Policy and Oversight Solutions:

  • Regulation: Require testing and transparency before deploying facial recognition in public spaces
  • Moratoriums: Several cities have banned government use of facial recognition until bias issues are resolved
  • Consent Requirements: People should know when facial recognition is being used and have ability to opt out
  • Accountability: Clear liability for harm caused by biased systems
  • Independent Audits: Third-party testing of facial recognition systems

Ethical Questions:

  • Should facial recognition be used in law enforcement at all, even if technical bias could be eliminated?
  • Does mass surveillance conflict with democratic values?
  • Who decides how this technology is deployed?
  • Are convenience and security worth the privacy and civil liberties trade-offs?

What Companies Have Done:

  • In 2020, Microsoft, IBM, and Amazon paused or stopped selling facial recognition to law enforcement
  • Some companies improved their datasets and retrained models
  • Industry groups developed ethical guidelines (though enforcement is limited)

Key Lessons Learned

Representation in Data Matters

If AI doesn't "see" diverse faces in training data, it won't work well for diverse populations in the real world.

"Good Enough" Overall Accuracy Isn't Good Enough

High average accuracy can hide terrible performance for specific groups. Disaggregated testing is essential.

High Stakes Require High Standards

When AI is used for law enforcement, access control, or other high-stakes decisions, even small error rates can cause serious harm.

Technical Problems Have Social Roots

Facial recognition bias reflects historical racism in technology design and whose faces have been considered "default."

Surveillance Technology Amplifies Existing Inequalities

Communities already over-policed face even more scrutiny with biased facial recognition.

Regulation Lags Behind Technology

Facial recognition was widely deployed before its biases were discovered and addressed, showing the need for a precautionary approach.

Discussion Guide

Small Group Discussion Questions

Question 1: Technical and Social Factors

Facial recognition bias comes from both technical factors (training data, lighting) and social factors (who develops the technology, whose faces are considered "default"). Which factor do you think is more important to address? Why?

Question 2: Wrongful Arrests

Robert Williams was arrested and jailed for 30 hours because facial recognition misidentified him. If you were in his position, how would you feel? What should happen to the people and systems responsible?

Question 3: Risk vs. Benefit

Facial recognition could help catch criminals and find missing people, but it also enables surveillance and makes errors that harm innocent people. Do the benefits outweigh the risks? Who should decide?

Question 4: Personal Impact

Joy Buolamwini had to wear a white mask to be recognized by AI systems. What does it feel like when technology "doesn't see you"? Have you ever experienced being invisible to a system or institution?

Question 5: Solutions

Several cities have banned facial recognition. Others say we should improve the technology instead. What do you think is the best approach? Should we ban it, regulate it, improve it, or something else?

Question 6: Your School

Should schools use facial recognition for security or attendance? What are the benefits and risks? How would you feel about being tracked this way?

Whole Class Discussion

  • Why do you think facial recognition bias persisted for so long before being widely recognized?
  • How does this case connect to broader issues of racial justice and surveillance?
  • Should companies be allowed to sell facial recognition technology to law enforcement? Why or why not?
  • What rights should people have regarding facial recognition? (Right to know when it's used? Right to opt out? Right to challenge errors?)
  • How is this different from human bias in identification? Is AI better or worse than human judgment?

Additional Resources

Videos

  • Joy Buolamwini: "How I'm fighting bias in algorithms" (TED Talk)
  • "Coded Bias" (Netflix documentary featuring Joy Buolamwini)
  • Vox: "Why facial recognition is dangerous for everyone"

Articles and Research

  • Gender Shades Study by Joy Buolamwini and Timnit Gebru
  • NIST Report: "Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects" (2019)
  • ACLU: "Face Recognition Technology" information page

Organizations Working on This Issue

  • Algorithmic Justice League (founded by Joy Buolamwini)
  • ACLU - Facial Recognition Project
  • Electronic Frontier Foundation