AI Ethics

Fairness and Bias in Algorithms

Lesson 3: Understanding How AI Can Inherit Human Biases

Evolve AI Institute

Essential Question

How can AI systems inherit human biases, and what can we do to create fairer technology?

Learning Objectives

By the end of this lesson, you will be able to:

  • Define algorithmic bias and explain how it differs from human bias
  • Describe how bias can enter AI systems through data, design, and feedback loops
  • Analyze real-world cases of algorithmic bias and identify who they harm
  • Apply key principles for creating fairer AI systems

Warm-Up: The Hiring Algorithm

You are an AI hiring system trained on past hiring decisions...

Your Task:

  1. Review the candidate profiles provided
  2. Rank candidates based on the limited data
  3. Compare your rankings with others
  4. Discuss: What influenced your decisions?

Think: If an AI learned from thousands of decisions like yours, what patterns might it learn?

What is Bias?

Bias: A tendency to favor or oppose something based on preconceived notions rather than objective evidence.

Human Bias

  • Unconscious preferences
  • Cultural influences
  • Limited experiences
  • Social conditioning

Algorithmic Bias

  • Learned from data
  • Reflects human biases
  • Systematic outcomes
  • Can amplify inequality

Algorithmic Bias Defined

Algorithmic Bias: When AI systems produce unfair outcomes that systematically disadvantage certain groups of people.

Think of it this way:

If you teach a child using only books that show doctors as men, they might think women can't be doctors. AI learns from data the same way: it can only learn what's in its training data.

How Bias Enters AI Systems

Training Data Bias

Historical data reflects past discrimination

Example: If hiring data shows mostly men in tech roles, AI might prefer male candidates

Sampling Bias

Data doesn't represent all groups equally

Example: Facial recognition trained mostly on light-skinned faces

Design Bias

Choices made by developers about features

Example: Choosing which features a model weighs, such as zip code, which can act as a proxy for race or income

Feedback Loops

Biased outcomes create more biased data

Example: Predictive policing sends more officers to the same neighborhoods, where more arrests generate more data that reinforces the pattern
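
To make Training Data Bias concrete, here is a minimal Python sketch with synthetic, made-up numbers: a toy "model" that learns hire rates from biased historical decisions and then reproduces the same disparity for new candidates.

```python
import random

random.seed(0)

# Synthetic "historical decisions": candidates from groups A and B are
# equally qualified, but past human decisions favored group A.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5           # same qualification rate
    hire_rate = 0.7 if group == "A" else 0.3    # biased past decisions
    hired = qualified and random.random() < hire_rate
    history.append((group, qualified, hired))

# A naive "model": learn the historical hire rate among qualified
# candidates in each group, then recommend new candidates at that rate.
def learned_hire_rate(group):
    rows = [h for h in history if h[0] == group and h[1]]
    return sum(1 for _, _, hired in rows if hired) / len(rows)

for g in ("A", "B"):
    print(f"Group {g}: recommends ~{learned_hire_rate(g):.0%} of qualified candidates")
# Prints roughly 70% for A and 30% for B: no one programmed a preference
# for group A, but the model imitates the rates in its training history.
```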

Where AI Bias Appears

Criminal Justice

Risk assessment algorithms

Healthcare

Diagnostic tools and resource allocation

Employment

Resume screening and hiring

Financial Services

Loan approvals and credit scoring

Facial Recognition

Security and identification systems

Education

Admissions and placement decisions

Video: Fighting Bias in Algorithms

Joy Buolamwini: How I'm fighting bias in algorithms

TED Talk (9 minutes)

Watch for:
  • Examples of facial recognition bias
  • Impact on marginalized communities
  • The "coded gaze" concept
  • Solutions and advocacy efforts

Note: We'll pause for discussion at key points

Discussion: Video Reflection

Why is facial recognition bias a problem?

Who is harmed when AI systems don't work equally for everyone?

Is AI bias the AI's fault, or is it something humans created?

What surprised you most in this video?

Case Study Analysis: Bias Detectives

Your Mission:

Investigate real-world cases of algorithmic bias

Four Cases to Explore:
  • Case 1: COMPAS Criminal Risk Assessment
  • Case 2: Amazon's Recruiting Tool
  • Case 3: Facial Recognition Technology
  • Case 4: Healthcare Algorithms

Time: 30 minutes to investigate and prepare presentation

Investigation Questions

Use these questions to guide your analysis:

  1. What was the AI system designed to do?
  2. What bias or unfairness was discovered?
  3. How did this bias get into the system?
  4. Who was harmed by this bias?
  5. What could have been done differently?
  6. What lessons can we learn from this case?

Remember: Create a poster with visual elements to share your findings!

Case 1: COMPAS Risk Assessment

Background

Algorithm used by courts to predict likelihood of reoffending

The Problem

A 2016 ProPublica investigation found that Black defendants were far more likely than white defendants to be incorrectly flagged as high risk

Impact

Influenced sentencing, parole, and bail decisions affecting thousands

Case 2: Amazon's Recruiting Tool

Background

AI system to screen resumes and identify top candidates

The Problem

System learned to penalize resumes containing the word "women's" (as in "women's chess club captain") and downgraded graduates of all-women's colleges

Impact

Company ultimately scrapped the tool in 2018
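
A toy sketch of the pattern reported in this case, with invented words and weights (not Amazon's actual model): a screener that has learned a negative weight for a word that was rare in past, mostly male hires.

```python
# Invented word weights a screener might learn from skewed history;
# nothing here was explicitly programmed to prefer men.
learned_weights = {
    "engineer": +2.0, "captain": +1.5,
    "women's": -2.0,   # learned from its rarity in past hires' resumes
}

def score(resume_text):
    # Very crude tokenization, for illustration only.
    words = resume_text.lower().replace(",", " ").split()
    return sum(learned_weights.get(w, 0.0) for w in words)

print(score("Software engineer, chess club captain"))          # 3.5
print(score("Software engineer, women's chess club captain"))  # 1.5
# Identical qualifications, lower score -- the learned weight quietly
# penalizes a word associated with women.
```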

Case 3: Facial Recognition

Background

Technology used for security, identification, and surveillance

The Problem

Error rates as high as 35% for darker-skinned women, versus under 1% for lighter-skinned men (MIT Gender Shades study)

Impact

Wrongful arrests, privacy violations, discriminatory surveillance

Case 4: Healthcare Algorithms

Background

Algorithms to allocate healthcare resources and predict needs

The Problem

Used past healthcare spending as a proxy for health needs; because less money has historically been spent on Black patients at the same level of illness, the algorithm underestimated their needs

Impact

Reduced access to needed care for millions of vulnerable patients
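
A minimal sketch of the proxy problem in this case, with entirely made-up numbers: when past spending stands in for health need, patients whose care was historically underfunded get ranked as healthier than they really are.

```python
# Hypothetical patients: (id, group, true_need 0-10, past_spending_usd).
# Spending is lower for Black patients at the same level of need,
# reflecting unequal access to care -- numbers are illustrative only.
patients = [
    ("P1", "white", 8, 9000),
    ("P2", "Black", 8, 5000),   # same need as P1, lower past spending
    ("P3", "white", 4, 5500),
    ("P4", "Black", 4, 2800),
]

# The flawed design: rank by spending as a stand-in for need,
# and enroll the top two in an extra-care program.
by_spending = sorted(patients, key=lambda p: p[3], reverse=True)
enrolled = {p[0] for p in by_spending[:2]}

for pid, group, need, spending in patients:
    status = "enrolled" if pid in enrolled else "passed over"
    print(f"{pid} ({group}, true need {need}): {status}")
# P2 has the same true need as P1 but is passed over, while the
# lower-need P3 is enrolled: the proxy, not the need, decides.
```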

Creating Fair AI: Key Principles

Fairness

Equal treatment regardless of demographics

Transparency

People should know when AI affects them

Accountability

Humans responsible for outcomes

Diversity

Inclusive development teams

Testing

Evaluation across diverse groups (see the sketch after these principles)

Human Oversight

Critical decisions need human review
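
One way to act on the Testing principle above, sketched here with hypothetical evaluation records: report error rates per group rather than a single overall score, since an acceptable average can hide a large gap.

```python
# Hypothetical evaluation records: (group, true_label, predicted_label).
results = [
    ("light", 1, 1), ("light", 0, 0), ("light", 1, 1), ("light", 0, 0),
    ("dark",  1, 0), ("dark",  0, 0), ("dark",  1, 1), ("dark",  0, 1),
]

def error_rate(rows):
    return sum(t != p for _, t, p in rows) / len(rows)

print(f"Overall error: {error_rate(results):.0%}")   # 25% -- looks tolerable
for g in ("light", "dark"):
    subset = [r for r in results if r[0] == g]
    print(f"  {g}-skinned faces: {error_rate(subset):.0%}")
# light: 0%, dark: 50% -- the single overall number hides the disparity,
# which is why evaluation must be disaggregated by group.
```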

Our AI Ethics Charter

As Future AI Developers and Users, We Commit To:

  • Prioritize fairness and equity in all AI systems
  • Ensure transparency in how AI makes decisions
  • Hold ourselves and others accountable for AI outcomes
  • Advocate for diverse perspectives in AI development
  • Test AI thoroughly across all populations
  • Maintain human oversight for high-stakes decisions
  • Speak up when we see bias or unfairness
  • Continuously learn about AI ethics and justice

Key Takeaways

AI systems can and do inherit human biases through training data and design choices

Biased AI can have serious real-world consequences, especially for marginalized communities

Creating fair AI requires diverse teams, thoughtful design, rigorous testing, and ongoing monitoring

We all have a role in advocating for ethical AI development and use

Questions?

Remember: Technology reflects the values of its creators.

Let's create AI that works fairly for everyone.

Next Steps:

  • Complete your reflection essay
  • Continue investigating AI bias in the news
  • Share what you learned with others

Evolve AI Institute