Lesson 3: Understanding How AI Can Inherit Human Biases
Evolve AI Institute
How can AI systems inherit human biases, and what can we do to create fairer technology?
By the end of this lesson, you will be able to:
Explain how AI systems can inherit human biases through training data and design choices
Analyze real-world cases of algorithmic bias and their consequences
Describe what it takes to build and advocate for fairer AI systems
Warm-up scenario: You are an AI hiring system trained on past hiring decisions...
Think: If an AI learned from thousands of decisions like yours, what patterns might it learn?
Bias: A tendency to favor or oppose something based on preconceived notions rather than objective evidence.
Algorithmic Bias: When AI systems produce unfair outcomes that systematically disadvantage certain groups of people.
If you teach a child using only books that show doctors as men, they might think women can't be doctors. AI learns from data the same way—it can only learn what's in its training data.
Historical data reflects past discrimination
Example: If past hiring data shows mostly men in tech roles, the AI may learn to prefer male candidates (see the sketch after this list)
Data doesn't represent all groups equally
Example: Facial recognition trained mostly on light-skinned faces
Design choices: developers decide which features the model uses and how they are weighted
Example: Prioritizing attributes, such as an uninterrupted employment history, that correlate with a particular group
Biased outcomes create more biased data
Example: Predictive policing sends more officers to the same neighborhoods, producing more recorded incidents there, which the algorithm reads as confirmation of its prediction
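To make the first source concrete, here is a minimal sketch in Python of how a model trained on historically skewed hiring decisions can reproduce that skew. All of the data, feature names, and numbers below are invented for illustration, and it assumes scikit-learn is available; it is not taken from any real hiring system.

```python
# Minimal illustration (hypothetical data): a model trained on past hiring
# decisions that favored one group learns to rely on a proxy for that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented features: a skill score (what we actually care about) and a proxy
# feature that correlates with group membership but not with skill.
group = rng.integers(0, 2, n)               # 0 or 1, the applicant's group
skill = rng.normal(50, 10, n)               # true qualification, same distribution for both groups
proxy = group + rng.normal(0, 0.3, n)       # correlates with group, not with skill

# Historically biased labels: past recruiters hired mostly from group 1,
# even at the same skill level.
hired = (skill + 20 * group + rng.normal(0, 5, n)) > 60

X = np.column_stack([skill, proxy])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two applicants with identical skill, differing only in the proxy feature.
applicant_a = [[55, 0.0]]   # group-0-like proxy value
applicant_b = [[55, 1.0]]   # group-1-like proxy value
print("P(hire | group-0-like):", model.predict_proba(applicant_a)[0, 1])
print("P(hire | group-1-like):", model.predict_proba(applicant_b)[0, 1])
# The second probability comes out much higher even though skill is identical:
# the model has learned the historical preference, not merit.
```

Note that nothing in this sketch mentions gender or group explicitly at prediction time; the bias arrives entirely through the historical labels and the correlated proxy feature.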
Criminal justice: Risk assessment algorithms
Healthcare: Diagnostic tools and resource allocation
Employment: Resume screening and hiring
Finance: Loan approvals and credit scoring
Law enforcement: Security and identification systems
Education: Admissions and placement decisions
TED Talk (9 minutes)
Note: We'll pause for discussion at key points
Investigate real-world cases of algorithmic bias
Time: 30 minutes to investigate your case and prepare a presentation
Use these questions to guide your analysis:
Remember: Create a poster with visual elements to share your findings!
Case study: COMPAS risk assessment (criminal justice)
Algorithm used by courts to predict the likelihood of reoffending
A 2016 ProPublica investigation found racial disparities in its risk scores
Influenced sentencing, parole, and bail decisions affecting thousands of defendants
Case study: Amazon's experimental hiring tool (employment)
AI system built to screen resumes and identify top candidates
The system learned to penalize resumes containing words associated with women
Amazon ultimately scrapped the tool in 2018
Case study: Facial recognition (security and surveillance)
Technology used for security, identification, and surveillance
The Gender Shades study found error rates of up to 35% for darker-skinned women, versus under 1% for lighter-skinned men
Consequences include wrongful arrests, privacy violations, and discriminatory surveillance
Case study: Healthcare risk-prediction algorithm (healthcare)
Algorithm used by hospitals and insurers to allocate healthcare resources and predict which patients need extra care
A 2019 study found it used healthcare spending as a proxy for health needs, disadvantaging Black patients who faced barriers to accessing care (illustrated in the sketch below)
Reduced access to needed care for millions of vulnerable patients
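To see why a spending proxy can go wrong, here is a minimal, hypothetical sketch; the patients, numbers, and threshold are all invented for illustration and are not drawn from the actual study. Two equally sick patients are treated differently because the proxy reflects access to care rather than need.

```python
# Hypothetical illustration (invented numbers) of using past spending as a
# proxy for health need when allocating a care-management program.
patients = [
    # name,        chronic_conditions,  past annual spending ($)
    ("Patient A",  4,                   9000),   # same illness burden, good access to care
    ("Patient B",  4,                   5500),   # same illness burden, barriers to access
]

SPENDING_THRESHOLD = 8000   # program admits patients whose proxy score exceeds this

for name, conditions, spending in patients:
    admitted = spending > SPENDING_THRESHOLD
    print(f"{name}: {conditions} chronic conditions, "
          f"${spending} past spending -> admitted: {admitted}")

# Both patients are equally sick, but the proxy (spending) reflects access
# to care rather than need, so only Patient A is offered extra support.
```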
Fairness: Equal treatment regardless of demographics
Transparency: People should know when AI affects them
Accountability: Humans remain responsible for outcomes
Diversity: Inclusive development teams
Testing: Evaluation across diverse groups (see the sketch after this list)
Human oversight: Critical decisions need human review
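As one concrete way to do "evaluation across diverse groups", here is a minimal sketch of a disaggregated audit; the records and group names are invented for illustration. Instead of reporting a single accuracy number, it reports the error rate separately for each group so that gaps become visible.

```python
# Hypothetical disaggregated audit: compute error rates per group instead of
# one overall number, so differences between groups are visible.
from collections import defaultdict

# Invented evaluation records: (group, true_label, predicted_label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    errors[group] += int(truth != prediction)

overall = sum(errors.values()) / sum(totals.values())
print(f"Overall error rate: {overall:.0%}")
for group in sorted(totals):
    print(f"{group} error rate: {errors[group] / totals[group]:.0%}")
# A single overall number (here 25%) hides that group_b's error rate (50%)
# is far higher than group_a's (0%).
```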
AI systems can and do inherit human biases through training data and design choices
Biased AI can have serious real-world consequences, especially for marginalized communities
Creating fair AI requires diverse teams, thoughtful design, rigorous testing, and ongoing monitoring
We all have a role in advocating for ethical AI development and use
Remember: Technology reflects the values of its creators.
Let's create AI that works fairly for everyone.
Evolve AI Institute