Evolve AI Institute

Answer Keys & Discussion Facilitation Notes

Lesson 11: AI in Healthcare – Diagnosis and Treatment

Table of Contents

  1. Case Study Sample Answers
  2. Ethical Dilemma Discussion Guides
  3. Assessment Question Answer Keys
  4. Discussion Facilitation Strategies
  5. Common Student Questions and Responses
  6. Misconception Identification and Correction

1. Case Study Sample Answers

Case Study 1: AI Detects Lung Cancer on CT Scans

Medical Problem: Early-stage lung cancer detection, which has historically been challenging because small tumors may be missed by human radiologists, especially in complex images with overlapping structures.

How AI Works: The AI system was trained on a dataset of over 50,000 annotated CT scans, including confirmed cancer cases and healthy scans. It uses deep learning (convolutional neural networks) to analyze patterns in lung tissue, examining each slice of the CT scan, identifying suspicious nodules, and classifying their likelihood of being cancerous based on size, shape, density, and location.
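A toy illustration of the pattern matching at the heart of a CNN may help students: slide a small filter over a grid and score how strongly each region matches. The "image" and filter below are invented 0/1 grids for classroom demonstration, not real CT data.

```python
# Toy sketch of convolution, the core CNN operation: slide a small filter
# over an image and score how well each region matches. Grids are invented.
image = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
blob = [[1, 1],
        [1, 1]]  # a 2x2 "bright spot" detector

def convolve(img, filt):
    """Score every position where the filter fits inside the image."""
    fh, fw = len(filt), len(filt[0])
    out = []
    for r in range(len(img) - fh + 1):
        row = []
        for c in range(len(img[0]) - fw + 1):
            row.append(sum(img[r + i][c + j] * filt[i][j]
                           for i in range(fh) for j in range(fw)))
        out.append(row)
    return out

scores = convolve(image, blob)
# The highest score marks where the image best matches the filter
print(max(max(row) for row in scores))  # → 4, at the bright 2x2 region
```

A real CNN learns thousands of such filters from data rather than having them hand-written, but the sliding-window scoring idea is the same.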

Technology Type: Computer Vision, Machine Learning (Deep Learning with CNNs)

Positive Outcomes:

  1. Detected 94% of cancers in validation studies, including some missed by experienced radiologists
  2. Reduced false positive rate by 11%, meaning fewer unnecessary biopsies
  3. Enabled earlier detection, potentially leading to more successful treatment outcomes
  4. Reduced radiologist workload by flagging high-priority cases for immediate review

Effectiveness: 94% sensitivity (catching 94/100 actual cancers) and 86% specificity (correctly identifying 86/100 healthy scans). When radiologists used AI as a “second reader,” overall diagnostic accuracy improved by 5–8%.
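For students comfortable with code, the two metrics above can be demonstrated with a short sketch; the counts are the illustrative 100-scan figures from the parenthetical definitions, not real study data.

```python
# Sensitivity and specificity, using the illustrative 100-scan counts above.
def sensitivity(true_pos, false_neg):
    """Fraction of actual cancers the system flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of healthy scans correctly cleared."""
    return true_neg / (true_neg + false_pos)

# 100 scans with cancer, 100 healthy scans (illustrative numbers only)
sens = sensitivity(true_pos=94, false_neg=6)   # 0.94
spec = specificity(true_neg=86, false_pos=14)  # 0.86
print(f"Sensitivity: {sens:.0%}, Specificity: {spec:.0%}")
```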

Limitations/Risks:

  1. AI trained primarily on certain demographics may be less accurate for underrepresented populations
  2. System may fail to detect cancers with unusual presentations not well-represented in training data
  3. Over-reliance on AI might cause radiologists to miss cancers the AI fails to flag
  4. AI cannot consider patient symptoms, history, or other contextual factors that only clinicians can integrate

Ethical Concerns: Privacy (who accesses data during AI training?), bias (some groups may receive less accurate diagnoses), accountability (if AI misses a cancer, who is responsible?), trust (will patients trust AI?), access (will expensive tools only be in wealthy hospitals?).

Human-AI Collaboration: Radiologists review all AI-flagged cases and make final diagnostic decisions. AI serves as a “second reader” but humans retain ultimate authority.

Case Study 2: AI Recommends Personalized Cancer Treatment

Medical Problem: Different patients respond differently to cancer treatments based on genetic factors. Traditional “one-size-fits-all” approaches may be ineffective for some patients while causing unnecessary side effects.

How AI Works: ML algorithms analyze patient genetic profiles, tumor genetic sequencing, electronic health records, treatment outcomes from thousands of similar patients, and medical research literature. Uses collaborative filtering and predictive modeling to match patients with optimal treatment protocols.
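A highly simplified stand-in for the "similar patients" matching described above can make the idea concrete for students. All patient records and features below are invented for illustration; real systems use far richer data and more sophisticated models.

```python
# Minimal sketch of matching a patient to the treatments that worked for
# similar past patients. Records and features are invented, e.g.
# (age, tumor_marker_level, mutation_score).
from math import dist

history = [
    ((54, 2.1, 0.8), "protocol_A"),
    ((61, 3.4, 0.2), "protocol_B"),
    ((58, 2.0, 0.9), "protocol_A"),
    ((70, 3.6, 0.1), "protocol_B"),
]

def recommend(patient, records, k=3):
    """Vote among the k most similar past patients."""
    nearest = sorted(records, key=lambda r: dist(patient, r[0]))[:k]
    treatments = [t for _, t in nearest]
    return max(set(treatments), key=treatments.count)

print(recommend((56, 2.2, 0.7), history))  # → protocol_A (closest neighbors)
```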

Technology Type: Machine Learning (multiple algorithms), Predictive Analytics, Data Mining, NLP

Effectiveness: AI-guided treatment selection improved complete response rates from 45% (standard) to 61% (AI-personalized). Patients also experienced fewer severe side effects.

Ethical Concerns: Privacy (genetic data is highly sensitive), autonomy (right to refuse AI-recommended treatments?), equity (personalized medicine only for wealthy?), bias (historical data may reflect treatment biases), informed consent (do patients understand how AI made recommendations?).

Case Study 3: AI Predicts Hospital Readmission Risk

Medical Problem: Approximately 20% of hospital patients are readmitted within 30 days, often due to preventable complications.

How AI Works: ML models analyze EHR data including diagnosis, prior hospitalizations, medications, lab results, vital signs, age, social determinants of health, and clinical notes. Calculates risk score for 30-day readmission.

Effectiveness: 82% sensitivity; reduced readmission rate from 18% to 13% after implementation; saved $10,000+ per prevented readmission.
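The savings figure can be turned into a quick back-of-envelope exercise. The patient volume below is an assumed number for illustration; the rates and per-readmission cost come from the case study.

```python
# Back-of-envelope illustration of the savings above (assumed patient volume).
patients_per_year = 10_000      # hypothetical hospital volume
before, after = 0.18, 0.13      # readmission rates from the case study
cost_per_readmission = 10_000   # dollars saved per prevented readmission

prevented = patients_per_year * (before - after)
savings = prevented * cost_per_readmission
print(f"{prevented:.0f} readmissions prevented, ~${savings:,.0f} saved")
```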

Ethical Concerns: Fairness (are certain groups more likely flagged?), resource allocation, labeling effects (does “high risk” flag change how staff treat patients?), insurance implications.

2. Ethical Dilemma Discussion Guides

Dilemma 1: “The Conflicting Recommendation”

An AI diagnostic system recommends a biopsy for a suspicious lung nodule, assigning a 72% probability of malignancy. However, the experienced radiologist believes the nodule is benign based on its characteristics and patient history. The patient is anxious. Should the doctor follow the AI’s recommendation or their own judgment?

Key Issues: Professional autonomy vs. algorithmic support; balancing different types of evidence; patient anxiety and informed consent; possibility that both could be wrong.

Discussion Facilitation:

  1. Opening: “If you were the patient, what would you want the doctor to do?”
  2. Probing: “What additional information would help?” “Is 72% high enough to justify a biopsy?” “Should the doctor’s opinion get more weight than the AI?”
  3. Solutions: Get second opinion, use additional tests, have detailed conversation with patient, establish hospital protocols for conflicting recommendations.

Strong Student Responses: Recognize legitimate concerns on multiple sides; avoid simplistic conclusions; consider patient autonomy; propose solutions preserving both human judgment and AI benefits; acknowledge that medicine involves uncertainty.

Dilemma 2: “The Biased Algorithm”

A hospital implements an AI to predict kidney failure risk. After one year, researchers discover it significantly underestimates risk for Black patients because the AI was trained on historical data where Black patients received different treatment. The hospital must decide: continue using the flawed system, shut it down, or try to correct it while in use.

Key Issues: Algorithmic bias perpetuating disparities; historical inequities in training data; harm to marginalized groups vs. benefits to others; urgency vs. maintaining some benefit.

Discussion Facilitation:

  1. Opening: “What would you do if you were the hospital CEO?”
  2. Probing: “Is it ethical to benefit one group while harming another?” “How did the AI learn to be biased?” “What if shutting down means some patients miss early interventions?”
  3. Solutions: Immediate shutdown with manual assessment, continued use with manual override for affected patients, rapid correction with fairness-aware techniques, transparent communication, community oversight board.

Strong Student Responses: Recognize algorithm reflects systemic issues; acknowledge harm while considering impacts of shutting down; propose interim solutions; consider transparency and community involvement; understand fixing bias requires more than technical adjustments.

Dilemma 3: “The Insurance AI”

An insurance company wants to use AI to predict which individuals are likely to develop expensive chronic diseases over the next 5 years. They claim it’s for preventive wellness programs, but critics worry it could be used for pricing or coverage denial. The AI is 85% accurate and no laws prevent later use for pricing decisions.

Key Issues: Beneficial vs. discriminatory use of same technology; privacy of health predictions; business interests vs. public health; trust when legal protections are limited.

Discussion Facilitation:

  1. Opening: “Should insurance companies be allowed to use AI to predict who will get sick?”
  2. Solutions: Legal prohibition on using predictions for pricing, mandatory third-party audits, opt-in systems, government regulation, transparency requirements, public health programs as alternative.

3. Assessment Question Answer Keys

Case Study Analysis Worksheet – Key Points

Questions 1–3 (Understanding): Students should demonstrate that AI systems address specific medical problems, work by learning from large datasets, use identifiable technologies, and require both technical and medical expertise.

Questions 4–6 (Outcomes): Students should identify multiple specific benefits (not just “faster” or “more accurate”), provide quantitative data when available, and compare AI to human capabilities fairly.

Questions 7–9 (Critical Analysis): Students should identify meaningful limitations, discuss ethical concerns using appropriate frameworks (privacy, equity, autonomy, beneficence), and understand humans remain central to decision-making.

Questions 10–12 (Personal Reflection): Expect thoughtful, nuanced responses with clear reasoning and realistic improvement suggestions.

Questions 13–15 (Connections): Look for specific community connections, creative but plausible applications, and questions demonstrating curiosity.

Exit Ticket Answer Keys

Option 1 (3-2-1) – Expected Learning Points:

Option 2 (Trust) – Common Patterns: “Yes, probably” (35–40%) is most common, conditional on human oversight. Priority ethical concerns: algorithmic bias and equity (40%), privacy and data security (30%), accountability (20%), access (10%).

4. Discussion Facilitation Strategies

Socratic Questioning Techniques

Level | Type | Example Questions
1 | Clarification | “What do you mean by [term]?” “Can you give me an example?”
2 | Probing Assumptions | “What are you assuming about [stakeholder]?” “Is that always true?”
3 | Probing Reasoning | “How did you reach that conclusion?” “What evidence supports your position?”
4 | Exploring Implications | “If that’s true, what else must be true?” “Who would benefit? Who might be harmed?”
5 | Questioning the Question | “Is this the right question?” “What assumptions does this question make?”

Managing Difficult Discussions

When Students Oversimplify: Ask probing questions; “That’s one perspective. What might someone on the other side say?”; present counterexamples.

When Students Are Silent: Use think-pair-share; offer sentence starters; lower stakes with neighbor discussions; normalize multiple perspectives.

When Discussion Becomes Heated: Redirect to evidence; acknowledge emotions; establish norms; take a break for individual written reflections.

When Students Express Personal Health Experiences: Thank them for sharing; connect to concepts without dwelling; ensure supportive space; follow up privately if concerned.

Equity and Inclusion in Discussions

5. Common Student Questions and Responses

“Will AI replace doctors?”

Short Answer: No. AI will change what doctors do but not replace them.

Detailed: AI is excellent at specific tasks like pattern recognition, but healthcare requires empathy, communication with anxious patients, decisions in ambiguous situations, and building trust. Think of AI as a tool that augments doctors’ abilities—like how calculators didn’t replace mathematicians. Doctors of the future will work with AI, focusing more on the human aspects of medicine.

“If AI is so accurate, why don’t we just use it alone?”

Short Answer: Even highly accurate AI makes mistakes, and healthcare decisions require contextual understanding AI lacks.

Detailed: If AI correctly identifies 95 out of 100 cancers, it still misses 5. Every patient matters, so even small error rates have significant consequences. AI is trained on past data, so it performs best on typical cases. It can’t consider a patient’s full context—other conditions, preferences, social situation. The best approach combines AI’s pattern recognition with human contextual understanding.
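For classes ready for the arithmetic, a base-rate example sharpens this point: when a disease is rare, even an accurate screener flags far more healthy people than sick ones. The prevalence and accuracy numbers below are illustrative assumptions, not figures from the lesson's case studies.

```python
# Why even an accurate screening AI can't run alone: at low disease
# prevalence, false positives swamp true positives. Numbers are illustrative.
population = 100_000
prevalence = 0.01        # 1% actually have the disease (assumed)
sens, spec = 0.95, 0.90  # e.g. detects 95/100 cancers

sick = population * prevalence                # 1,000 people
true_pos = sick * sens                        # 950 correctly flagged
false_pos = (population - sick) * (1 - spec)  # ~9,900 healthy people flagged

ppv = true_pos / (true_pos + false_pos)  # chance a flagged person is a real case
print(f"Flagged: {true_pos + false_pos:.0f}, of which real cases: {ppv:.1%}")
```

So a flagged patient here has under a 1-in-10 chance of actually being sick, which is exactly why human follow-up and contextual judgment remain essential.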

“Can’t programmers just fix the bias in AI?”

Short Answer: It’s much more complicated than just “fixing” code.

Detailed: Bias comes from multiple sources. Training data from hospitals that historically provided different care to different groups encodes inequity the AI learns. Even removing race/gender, AI can infer from proxies like zip codes. “Fair” has multiple conflicting definitions. Fixing bias requires diverse teams, diverse data, ongoing monitoring, and willingness to update systems—it’s not a simple bug fix.

“Is my medical data being used to train AI without my permission?”

Short Answer: It depends on healthcare system policies, but this is an important concern.

Detailed: Under HIPAA, de-identified data can be used for research without explicit permission, but even de-identified data can sometimes be re-identified. Many privacy advocates argue patients should have more control over their health data use. Encourage critical thinking: “If you had a rare disease and your data could help train AI to help others, how would you feel? What protections would you want?”

“What if AI makes a mistake and someone dies—who goes to jail?”

Short Answer: This is an unsettled legal question courts are still working through.

Detailed: Liability could fall on the doctor (who made the final decision), the AI company (if the system was flawed), or the hospital (for implementation). In most cases, the treating physician remains primarily responsible. Legal frameworks are evolving as AI becomes more autonomous. This is why clear documentation of how AI was used in decision-making is so important.

“Can AI understand human emotions and pain?”

Short Answer: AI can recognize patterns related to emotions but doesn’t experience or understand them.

Detailed: AI might detect crying and classify it as “sadness,” but it doesn’t comprehend what sadness means. In healthcare, understanding patient fears, hopes, and suffering is central to good care. AI can assist by flagging distressed patients or providing information, but emotional connection must come from humans. This is one reason the human dimension of healthcare will remain irreplaceable.

6. Misconception Identification and Correction

Misconception Matrix

Misconception | Why It Arises | Correction Strategy
“AI is always more accurate than humans” | Media hype; selective reporting | Show examples where AI fails; discuss limited domains
“AI can read doctors’ minds or know things magically” | Misunderstanding of ML | Explain training data and pattern recognition
“Biased AI is the result of racist programmers” | Oversimplification | Explain historical data bias and structural factors
“Healthcare AI is science fiction” | Limited awareness | Share multiple real-world current examples
“AI will make healthcare cheaper for everyone” | Optimistic assumptions | Discuss implementation costs, access barriers
“If AI is involved, humans aren’t responsible” | Misunderstanding of accountability | Clarify human-AI collaboration; legal responsibility
“AI recommendations are objective and unbiased” | Belief technology is neutral | Show how training data contains human biases
“Privacy isn’t an issue if data is anonymous” | Misunderstanding of re-identification | Explain de-identification limitations
“AI can replace medical school/training” | Overestimation of AI | Discuss essential human skills AI can’t replicate

Correction Techniques

  1. Pre-Assessment: Brief questionnaire revealing common misconceptions before teaching
  2. Cognitive Conflict: Present evidence that contradicts the misconception
  3. Explicit Refutation: “Many people think [misconception], but actually [correction] because [evidence]”
  4. Multiple Examples: Show several cases demonstrating the correct concept
  5. Metacognitive Reflection: “What made you think that initially? How has your understanding changed?”

Common Misconception Scenarios

Student says: “AI is basically the same as Google search”

Google search finds existing information matching keywords—it retrieves web pages humans wrote. Healthcare AI learns patterns from data and makes predictions that don’t exist anywhere yet. A diagnostic AI doesn’t search for “what does this X-ray show?”—it learned from millions of X-rays what patterns indicate cancer and applies that learned pattern recognition to new, never-before-seen X-rays. Google helps find what others already know; AI helps discover new insights from data.

Student says: “If the AI was trained on racist data, just use different data”

That’s part of the solution, but much more complicated. The “racist data” is often just regular medical records from hospitals that historically provided different care to different groups. We can’t always tell where bias is hidden. Getting better data is sometimes impossible—if a group has been historically underserved, there aren’t enough medical records. That’s a catch-22. Even with better data, we must define what “fair” means, and different definitions conflict. Better data is important but it’s just one piece of a larger puzzle including testing, monitoring, transparency, and diverse teams.

Student says: “AI will definitely replace doctors because computers are smarter”

Challenge the assumption: What does “smart” mean? AI is excellent at pattern recognition but can’t comfort a crying child, explain a diagnosis to a worried family, make ethical decisions without clear “right” answers, or adapt to unique situations it never learned about. Healthcare is about understanding people, their fears, values, and circumstances. AI will change what doctors spend time on—handling routine analysis, freeing them to focus on the human aspects of care.

Closing Notes for Teachers

This answer key supports, not replaces, your professional judgment. Strong student responses may look different than these examples while still demonstrating mastery.

Remember that discussing healthcare and medical conditions can be sensitive. Create a classroom environment where topics can be explored thoughtfully while respecting students’ privacy and emotional wellbeing.

The field of healthcare AI evolves rapidly. While core ethical principles and concepts remain relevant, specific examples and statistics may need updating. Encourage students to engage with current news and research.