Grades 6-8 | Social Studies | 90 Minutes

Lesson 3: AI Ethics - Fairness and Bias in Algorithms

A critical examination of how AI systems can inherit human biases and the importance of creating fair, equitable artificial intelligence systems that serve all people.

Learning Objectives

  • Define algorithmic bias and explain how human biases can be encoded into AI systems through training data and design choices
  • Analyze real-world case studies where AI bias has led to unfair outcomes in areas like criminal justice, hiring, and healthcare
  • Evaluate the societal impact of biased AI systems and understand how they can perpetuate or amplify existing inequalities
  • Propose solutions for creating fairer AI systems and develop an ethical framework for AI development

Standards Alignment

  • CSTA 2-IC-20: Compare tradeoffs associated with computing technologies that affect people's everyday activities and career options
  • CSTA 2-IC-23: Describe tradeoffs between allowing information to be public and keeping information private and secure
  • NCSS D2.Civ.10.6-8: Explain the relevance of personal interests and perspectives, civic virtues, and democratic principles when people address issues and problems in government and civil society
  • ISTE 2.1.c: Use technology to seek feedback that informs and improves their practice and to demonstrate their learning in a variety of ways
  • Common Core ELA CCSS.ELA-LITERACY.RH.6-8.7: Integrate visual information with other information in print and digital texts

Materials Needed

  • Computer or tablet with internet access (one per group of 3-4 students)
  • Case study handouts: "Algorithmic Bias in Action" (4 different scenarios provided in downloadable materials)
  • Student worksheet: "Bias Detective Investigation Sheet"
  • Chart paper and markers for group brainstorming
  • Projector or large screen for video clips and class presentations
  • Video clips: "Joy Buolamwini: How I'm fighting bias in algorithms" (TED Talk - 9 min) or similar age-appropriate content
  • Sticky notes for interactive voting activity
  • Optional: Access to news articles about recent AI bias incidents

Lesson Procedure

  1. Hook - The Hiring Algorithm Simulation (15 minutes)

    Begin with an engaging simulation that demonstrates bias in action. Tell students they will play the role of an AI hiring system.

    Activity Setup:

    1. Present fictional job applicant profiles (include names suggesting different demographics, hobby indicators, zip codes)
    2. Ask students to quickly rank candidates based only on limited data points
    3. Reveal that many students made similar choices, often driven by unconscious patterns rather than deliberate judgment
    4. Discuss: "Did we judge fairly? What information influenced our decisions? What if an AI learned from our choices?"

    Key Discussion Points:

    • How did certain data points influence your decisions?
    • What assumptions did you make based on limited information?
    • If an AI learned from thousands of hiring decisions like ours, what patterns might it learn?
    • Is this fair? Why or why not?

    Introduce the essential question: "How can AI systems inherit human biases, and what can we do to create fairer technology?"

  2. Direct Instruction - Understanding Algorithmic Bias (20 minutes)

    Present core concepts through a multimedia approach combining lecture, visuals, and video content.

    Key Concepts to Cover:

    1. What is Algorithmic Bias?

    • Definition: When AI systems produce unfair outcomes that systematically disadvantage certain groups
    • Analogy: "If you teach a child using only books that show doctors as men, they might think women can't be doctors. AI learns from data the same way."

    2. How Bias Enters AI Systems (see the sketch after this list):

    • Training Data Bias: Historical data reflects past discrimination (example: if historical hiring data shows mostly men in tech roles, AI might prefer male candidates)
    • Sampling Bias: Data doesn't represent all groups equally
    • Design Bias: Choices made by developers about what features to include or prioritize
    • Feedback Loops: Biased outcomes create more biased data, making the problem worse
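
    For a concrete illustration (useful with advanced learners or for your own preparation), the minimal sketch below uses made-up hiring data to show the first and last mechanisms in action: a "model" that simply learns historical hire rates per group reproduces past discrimination, and its rejections can then feed the next round of training data. All groups and numbers are hypothetical.

    ```python
    # Minimal sketch with invented data: a "model" that learns historical
    # hire rates per group reproduces whatever discrimination is in the
    # data, even though no rule ever mentions the group explicitly.
    from collections import defaultdict

    # Synthetic historical hiring records: (group, was_hired)
    history = ([("A", True)] * 80 + [("A", False)] * 20
               + [("B", True)] * 40 + [("B", False)] * 60)

    # "Training": compute the historical hire rate for each group
    outcomes = defaultdict(list)
    for group, hired in history:
        outcomes[group].append(hired)
    hire_rate = {g: sum(v) / len(v) for g, v in outcomes.items()}

    # "Prediction": recommend hiring when the learned rate exceeds 0.5.
    # Equally qualified candidates from group B are now auto-rejected; if
    # those rejections become tomorrow's training data, the gap widens
    # over time (a feedback loop).
    for group in ("A", "B"):
        print(group, "recommended:", hire_rate[group] > 0.5,
              f"(learned rate = {hire_rate[group]:.2f})")
    ```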

    3. Real-World Impact:

    • Criminal Justice: Predictive policing and risk assessment algorithms
    • Healthcare: Diagnostic tools that work better for some demographics
    • Hiring: Resume screening systems that filter out qualified candidates
    • Financial Services: Loan approval algorithms with racial disparities
    • Facial Recognition: Systems that misidentify people of color at higher rates

    Video Segment: Show 5-7 minutes of Joy Buolamwini's TED Talk on algorithmic bias, pausing for discussion at key points.

    Discussion Questions:

    • "Why is facial recognition bias a problem?"
    • "Who is harmed when AI systems don't work equally for everyone?"
    • "Is AI bias the AI's fault, or is it something humans created?"

  3. Case Study Analysis - Bias Detectives (30 minutes)

    Students work in groups to analyze real-world case studies of algorithmic bias.

    Group Activity Structure:

    Divide class into four groups, assigning each a different case study:

    Case Study 1: COMPAS Criminal Risk Assessment

    • Background: Algorithm used to predict a defendant's risk of reoffending (recidivism)
    • Problem: A 2016 ProPublica investigation found Black defendants were nearly twice as likely as white defendants to be incorrectly flagged as high risk
    • Impact: Influenced sentencing and parole decisions

    Case Study 2: Amazon's Recruiting Tool

    • Background: AI system to screen resumes
    • Problem: Learned to penalize resumes containing the word "women's" (for example, "women's chess club captain")
    • Impact: Company scrapped the tool

    Case Study 3: Facial Recognition Technology

    • Background: Used for security and identification
    • Problem: Research found error rates as high as 35% for darker-skinned women, versus under 1% for lighter-skinned men
    • Impact: Wrongful arrests, privacy concerns, surveillance issues

    Case Study 4: Healthcare Algorithms

    • Background: Algorithms to allocate healthcare resources
    • Problem: Used past healthcare spending as a proxy for health needs, which disadvantaged Black patients
    • Impact: Reduced access to needed care for vulnerable populations

    Investigation Questions (on worksheet):

    1. What was the AI system designed to do?
    2. What bias or unfairness was discovered?
    3. How did this bias get into the system?
    4. Who was harmed by this bias?
    5. What could have been done differently?
    6. What lessons can we learn from this case?

    Groups create a poster summarizing their findings with visual elements, data, and key takeaways.

  4. Group Presentations and Class Discussion (15 minutes)

    Each group presents their case study findings (3-4 minutes per group).

    Presentation Format:

    • Brief overview of the case
    • Explanation of the bias discovered
    • Impact on people's lives
    • Proposed solutions

    Class Discussion Questions:

    • "What patterns do you notice across all these cases?"
    • "Are some groups affected more than others? Why?"
    • "Who should be responsible for preventing AI bias?"
    • "Should we stop using AI in high-stakes decisions like criminal justice or hiring?"
    • "How can we balance the benefits of AI with the risks of bias?"

    Record key insights and themes on chart paper for reference during the solution-building activity.

  5. Solution Building - Creating an AI Ethics Framework (10 minutes)

    Working as a whole class, develop a set of ethical principles for AI development.

    Brainstorming Activity:

    Ask students: "If you were creating rules for AI developers, what guidelines would you establish?"

    Sample Principles (guide students toward):

    • Fairness: AI should treat all people equitably regardless of demographics
    • Transparency: People should know when AI is making decisions about them
    • Accountability: Humans must be responsible for AI outcomes
    • Diverse Development: AI teams should include people from different backgrounds
    • Testing: AI should be tested across diverse populations before deployment
    • Human Oversight: Important decisions should always involve human review
    • Explainability: AI decisions should be understandable and challengeable

    Create a class "AI Ethics Charter" that students can sign, posting it in the classroom.

Assessment Strategies

Formative Assessment

  • Observation during case study analysis - depth of investigation and critical thinking
  • Quality of group discussion contributions and questions asked
  • Completion and thoroughness of investigation worksheet
  • Participation in whole-class discussions and solution-building
  • Informal checks for understanding through questioning during direct instruction

Summative Assessment

  • Group presentation on case study findings (scored with rubric)
  • Individual reflection essay: "Why AI Fairness Matters" (1-2 pages)
  • Create an infographic explaining algorithmic bias to younger students
  • Optional: Design proposal for a fair AI system in a specific domain
  • Quiz on key concepts: bias types, real-world examples, ethical principles

Success Criteria

Students demonstrate mastery when they can:

  • Define algorithmic bias with accurate examples
  • Explain at least 2 ways bias enters AI systems
  • Analyze a case study identifying bias, impact, and solutions
  • Articulate 3-5 ethical principles for AI development
  • Connect AI bias to broader social justice themes
  • Propose realistic solutions for creating fairer AI

Differentiation Strategies

For Advanced Learners:

  • Research technical solutions to bias: adversarial debiasing, fairness constraints, diverse training data strategies
  • Investigate the mathematical definitions of fairness (demographic parity, equalized odds, calibration)
  • Analyze code examples showing how bias can be measured and mitigated (see the sketch after this list)
  • Develop a detailed proposal for an AI ethics review board in their school or community
  • Study the intersection of AI bias with multiple aspects of identity (intersectionality)
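
If you pursue the fairness-definitions or code-analysis options above, a minimal sketch like the one below (all labels and predictions invented) shows that two common fairness metrics can be checked with nothing more than counting:

```python
# Minimal sketch (invented numbers) of two common fairness checks.
# Demographic parity: do groups receive positive predictions at equal rates?
# Equalized odds: are true/false positive rates equal across groups?

def rates(y_true, y_pred):
    """Return (selection rate, true positive rate, false positive rate)."""
    sel = sum(y_pred) / len(y_pred)
    pos = [p for t, p in zip(y_true, y_pred) if t == 1]  # truly positive cases
    neg = [p for t, p in zip(y_true, y_pred) if t == 0]  # truly negative cases
    tpr = sum(pos) / len(pos) if pos else 0.0
    fpr = sum(neg) / len(neg) if neg else 0.0
    return sel, tpr, fpr

# Hypothetical labels and model predictions for two groups (1 = positive)
sel_a, tpr_a, fpr_a = rates([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0])
sel_b, tpr_b, fpr_b = rates([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 0])

print(f"Demographic parity gap: {abs(sel_a - sel_b):.2f}")   # 0.33
print(f"Equalized odds gaps: TPR {abs(tpr_a - tpr_b):.2f}, "
      f"FPR {abs(fpr_a - fpr_b):.2f}")                       # 0.33, 0.33
```

A gap of 0 on a metric means the groups are treated identically under that definition; the different definitions can conflict with one another, which is itself a useful discussion point for advanced students.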

For Struggling Learners:

  • Provide simplified case studies with guided questions and sentence starters
  • Offer graphic organizers to structure analysis and note-taking
  • Allow verbal presentations instead of written reports
  • Pre-teach key vocabulary with visual supports and real-world examples
  • Provide more scaffolding during group work with specific role assignments
  • Focus on one detailed case study instead of comparing multiple

For English Language Learners:

  • Provide vocabulary lists with images and translations
  • Use videos with subtitles and pause frequently for comprehension checks
  • Allow students to discuss in native language before presenting in English
  • Provide sentence frames for discussions: "I think ___ because ___", "This bias affects ___ by ___"
  • Partner with strong English speakers for peer support
  • Accept multi-modal presentations (drawings, diagrams, video)

For Students with Different Needs:

  • Offer flexible seating and breaks during longer activities
  • Provide written copies of oral instructions
  • Allow use of assistive technology for reading and writing
  • Offer choice in presentation format (poster, slideshow, video, oral report)
  • Break lessons into smaller chunks with clear transitions

Extension Activities

Research Project - AI Bias in the News:

Students monitor news sources for recent stories about AI bias. Create a class database of incidents, categorizing by type of bias, domain affected, and resolution status. Present findings monthly.

Cross-Curricular Connections:

  • Math: Analyze datasets for representation disparities. Calculate bias metrics. Explore statistical concepts of false positives/negatives (worked example after this list).
  • Language Arts: Write persuasive essays on AI regulation. Analyze media coverage of AI bias incidents. Create public service announcements.
  • History: Connect to historical civil rights movements. Study how technology has been used both to advance and hinder equality.
  • Art: Create visual representations of abstract concepts like algorithmic fairness. Design infographics explaining bias to general audiences.
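
A classroom-ready worked example for the Math connection (all numbers hypothetical): the false positive rate is FPR = FP / (FP + TN). If a screening tool wrongly flags 5 of 50 truly low-risk people in Group A (FPR = 5/50 = 10%) but 15 of 50 in Group B (FPR = 15/50 = 30%), Group B members are three times as likely to be wrongly flagged, even though the tool's overall accuracy may still look high.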

Community Action Project:

Identify a local organization that uses AI systems (for example, a police department, hospital, or employer). Research its AI policies. Write a letter with recommendations based on the ethical framework developed in class. Present to the school board or community leaders.

Debate Activity:

Organize formal debates on AI ethics topics:

  • "Should facial recognition be banned in public spaces?"
  • "Should AI be used in criminal sentencing decisions?"
  • "Do the benefits of AI in healthcare outweigh the risks of bias?"
  • "Should AI developers be legally liable for biased outcomes?"

Guest Speaker Series:

Invite professionals working on AI ethics: researchers, advocates, policymakers, or technologists. Prepare student questions in advance. Follow up with thank-you letters including key learnings.

Creative Challenge - Design Fair AI:

Students design an AI application for a specific purpose (school admissions, healthcare triage, job recruitment) with explicit fairness considerations built in from the start. Create detailed proposals including:

  • Purpose and users
  • Data requirements and sources
  • Potential biases and mitigation strategies
  • Testing plan across diverse groups
  • Oversight and accountability measures
  • Plan for addressing problems if they arise

Teacher Notes and Tips

Sensitive Topics Guidance:

This lesson touches on issues of race, justice, and inequality. Create a safe, respectful environment for discussion:

  • Establish ground rules for respectful dialogue
  • Emphasize that bias is often unconscious, not intentional
  • Focus on systems and structures, not individual blame
  • Acknowledge discomfort as part of learning
  • Be prepared to address questions about personal experiences with bias
  • Have resources available if students want to discuss topics privately

Common Misconceptions to Address:

  • Misconception: "Computers can't be biased because they're objective."
    Clarification: AI learns from human-created data that reflects human biases. Computers are only as objective as their training data and design.
  • Misconception: "Bias is always intentional."
    Clarification: Most AI bias is unintentional, resulting from incomplete data or overlooked design assumptions.
  • Misconception: "We should just stop using AI."
    Clarification: AI can be beneficial when designed thoughtfully. The goal is to create fairer AI, not abandon it entirely.

Preparation Tips:

  • Review case studies thoroughly before teaching - be prepared for detailed questions
  • Preview video content and identify best clips if time is limited
  • Prepare current examples - new AI bias incidents are discovered and addressed regularly
  • Have statistics and data ready to support discussions
  • Consider inviting administrators or counselors to sit in, given the sensitive topics

Facilitation Strategies:

  • Use "think-pair-share" for sensitive questions - let students process privately first
  • Employ Socratic questioning to deepen analysis without lecturing
  • Validate multiple perspectives while grounding discussion in evidence
  • If discussion becomes heated, pause and refocus on specific case details
  • Use anonymous question submission if students are hesitant to speak up

Follow-Up and Continuity:

  • This lesson works well as part of a broader AI unit
  • Consider revisiting ethical themes when teaching other AI topics
  • Keep the AI Ethics Charter visible and reference it in future lessons
  • Update case studies yearly as new examples emerge

Download Lesson Materials

Access all lesson materials, case studies, worksheets, presentation slides, and assessment tools. Each file can be downloaded individually.