Dilemma 1
The Overruled Diagnosis
Dr. Sarah Chen has 15 years of experience as a radiologist. She reviews a mammogram and concludes the patient has normal, healthy breast tissue with no signs of cancer.
However, the hospital's AI diagnostic system analyzes the same mammogram and flags it as "high probability of malignancy" (85% confidence). The AI has detected subtle calcification patterns that Dr. Chen interprets as benign.
The AI system was trained on 1.2 million mammograms and achieved 94% accuracy in clinical trials, slightly higher than the 91% average for human radiologists.
Dr. Chen reviews the images again but still sees no concerning features. She believes the AI is wrong this time.
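A point worth untangling before the questions: neither the flag's 85% confidence nor the system's 94% trial accuracy is, by itself, the probability that this particular flag is correct. A minimal Bayes sketch, using assumed numbers that do not come from the scenario (sensitivity and specificity both 0.94, and a 0.5% malignancy prevalence typical of screening populations), shows how much the base rate matters:

```python
# Hedged Bayes sketch with assumed inputs (not the scenario's figures):
# treat "94% accuracy" as both sensitivity and specificity, and assume
# malignancy is present in 0.5% of screening mammograms.
sens, spec, prev = 0.94, 0.94, 0.005

# Positive predictive value: P(cancer | AI flags the scan)
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
print(f"{ppv:.1%}")   # ≈ 7.3% — most positive flags would still be benign
```

Under these assumptions a positive flag is still far more likely to be a false alarm, which cuts both ways in discussion: it lends some support to Dr. Chen's skepticism, but the AI's 85% figure may already be a calibrated probability that accounts for prevalence.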
Discussion Questions:
- Should Dr. Chen follow her clinical judgment or the AI recommendation?
- What factors should influence the decision (patient risk factors, AI confidence level, Dr. Chen's expertise)?
- If Dr. Chen overrides the AI and cancer is later found, who is responsible?
- Should hospitals have policies requiring doctors to justify overriding AI recommendations?
- How would you feel as the patient in this situation?
Consider perspectives of: Patient, Dr. Chen, Hospital administration, AI developers, Patient's family, Insurance company, Medical licensing board
Dilemma 2
Biased Algorithm
Researchers discover that a widely used AI system for predicting hospital readmission risk consistently underestimates risk for Black patients compared to white patients with identical medical conditions.
The algorithm was trained on historical hospital data. Because Black patients historically had less access to healthcare, they had fewer hospital visits in the training data, making the AI less accurate for this population.
The system is currently used in 200+ hospitals nationwide to allocate intensive case-management resources to the highest-risk patients. Due to this bias, Black patients are less likely to receive these preventive services.
Fixing the algorithm will take 18-24 months and cost $15 million. In the meantime, should hospitals continue using it?
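The mechanism described above (an access gap appearing in the data as a risk gap) can be made concrete. Below is a minimal sketch with entirely hypothetical numbers, showing how a model trained on recorded utilization rather than true medical need learns to under-rank the group with less access to care:

```python
# Minimal sketch (all numbers hypothetical): when the training label is
# recorded hospital utilization, unequal access masquerades as unequal risk.
true_need_per_1000 = 300                    # assume identical true need in both groups
access = {"group_A": 1.0, "group_B": 0.6}   # hypothetical fraction of need that reaches care

# The label the model actually sees: visits recorded in historical data
recorded = {g: int(true_need_per_1000 * a) for g, a in access.items()}
print(recorded)   # {'group_A': 300, 'group_B': 180}

# A model fit to these labels scores group_B as "lower risk" despite equal
# need, so group_B members fall below cutoffs for case-management resources.
```

Note that nothing in this sketch requires anyone to act in bad faith; the bias is inherited from the historical data.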
Discussion Questions:
- Should hospitals immediately stop using the biased AI system, even though it helps some patients?
- Is using a biased algorithm better than having no algorithm at all to guide resource allocation?
- Who should pay for fixing the algorithmic bias—the AI company, the hospitals, or insurance?
- How can we prevent algorithmic bias from occurring in the first place?
- Should patients be informed when AI systems used in their care have known biases?
Consider perspectives of: Black patients, White patients, Hospital administrators, AI developers, Healthcare equity advocates, Insurance companies, Government regulators
Dilemma 3
Data Privacy vs. Medical Progress
A pharmaceutical company wants to develop an AI system to predict which Alzheimer's patients will respond to a new drug. To train the AI, they need access to comprehensive medical records from 500,000 patients, including:
- Complete medical history
- Genetic data
- Brain scans
- Cognitive test results
- Treatment outcomes
A major hospital system has this data from their patients. The data would be "de-identified" (names removed), but experts warn that sophisticated analysis could potentially re-identify individuals.
The pharmaceutical company offers to pay the hospital $20 million for the data and promises the AI system will be made available to all hospitals, potentially helping millions of Alzheimer's patients receive more effective treatment.
Patients provided their medical information for treatment, not research. Obtaining individual consent from 500,000 patients is impractical.
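The experts' warning about re-identification has a concrete basis: removing names leaves behind "quasi-identifiers" whose combinations can be unique to one person. A toy illustration with invented records:

```python
# Toy sketch of re-identification risk (records are invented). Even without
# names, rare combinations of remaining fields can single out individuals.
from collections import Counter

deidentified = [
    ("02139", "1958-03-02", "F"),   # (zip code, birth date, sex)
    ("02139", "1958-03-02", "F"),
    ("02139", "1971-11-20", "M"),
    ("94103", "1964-07-09", "F"),
]

counts = Counter(deidentified)
unique = [rec for rec, n in counts.items() if n == 1]
print(unique)   # records held by exactly one person can be linked against
                # public sources (voter rolls, social media) to recover names
```

In a dataset as rich as the one described here (genetics, brain scans, full histories), many records are likely to be unique in exactly this way.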
Discussion Questions:
- Should the hospital share the patient data for AI training? Why or why not?
- Does "de-identification" adequately protect patient privacy?
- Does the potential benefit to millions of future patients justify using data without individual consent?
- Should patients "own" their medical data and have control over its use?
- How would your answer change if the pharmaceutical company planned to profit from the AI system rather than share it freely?
Consider perspectives of: Patients whose data would be used, Alzheimer's patients who could benefit, Hospital administrators, Pharmaceutical company, Privacy advocates, Medical researchers, Family members of Alzheimer's patients
Dilemma 4
Expensive Precision vs. Affordable Standard Care
Maria has Stage III breast cancer. There are two treatment approaches:
Option A - Standard Protocol: Evidence-based chemotherapy regimen used for all patients with her cancer type. Success rate: 65%. Total cost: $120,000. Covered by her insurance.
Option B - AI-Guided Precision Medicine: Genomic testing ($7,000) plus AI analysis recommending a personalized drug combination based on her tumor's genetic profile. Predicted success rate: 82%. Total cost: $245,000.
Maria's insurance company denies coverage for Option B, stating "genomic testing and AI treatment recommendations are experimental and not medically necessary when effective standard treatment exists."
Maria cannot afford to pay the $125,000 difference out of pocket for the precision approach. Her oncologist believes Option B gives her significantly better chances.
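Before the questions, it may help to work through the scenario's own numbers as a rough incremental comparison (the "cost per additional expected success" framing is a standard health-economics rough cut, not something the scenario itself performs):

```python
# Incremental comparison using the scenario's figures
cost_a, success_a = 120_000, 0.65      # standard protocol
cost_b, success_b = 245_000, 0.82      # AI-guided precision medicine

extra_cost = cost_b - cost_a           # the $125,000 gap Maria would pay herself
extra_gain = success_b - success_a     # 0.17, i.e. 17 percentage points

print(round(extra_cost / extra_gain))  # ≈ 735,294 dollars per additional expected success
```

Whether roughly $735,000 per additional expected successful treatment is "worth it" is precisely the kind of value judgment the questions below probe.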
Discussion Questions:
- Should Maria's insurance be required to cover the more expensive AI-guided treatment?
- Is a 17-percentage-point improvement in success rate (65% vs 82%) large enough to justify roughly double the cost?
- If insurance covers expensive precision medicine for some patients, will it increase premiums for everyone?
- Should access to AI precision medicine depend on ability to pay?
- What if only wealthy patients can afford AI-guided treatment—does this worsen healthcare inequality?
Consider perspectives of: Maria (patient), Maria's family, Oncologist, Insurance company, Other insurance policyholders, Low-income patients, Healthcare economists, Hospital billing department
Dilemma 5
The Prediction You Don't Want to Know
Jason, age 32, undergoes genomic testing as part of cancer treatment. The AI system analyzes his DNA not only for cancer-related mutations but also scans for all genetic conditions.
The AI discovers Jason has a gene variant that gives him a 75% chance of developing early-onset Alzheimer's disease by age 50. There is currently no cure or prevention for this genetic form of Alzheimer's.
Jason did not consent to screening for Alzheimer's risk—he only agreed to cancer-related genetic analysis. He is currently healthy with no symptoms.
His doctor faces a dilemma: Should she tell Jason about the Alzheimer's prediction?
Arguments for telling: Jason has a right to know his genetic information; he can make life decisions accordingly; he could participate in research trials; family members might want to be tested.
Arguments against telling: No treatment available; will cause psychological distress; wasn't what Jason consented to; could affect his insurance, employment, and relationships; prediction might be wrong.
Discussion Questions:
- Should the doctor tell Jason about the Alzheimer's prediction? Why or why not?
- Do patients have a "right not to know" genetic information about untreatable conditions?
- Should AI systems analyze genetic data beyond the specific medical question asked?
- How would you want this situation handled if you were Jason?
- Should genetic information that predicts future disease be treated differently than information about current conditions?
Consider perspectives of: Jason (patient), Jason's doctor, Jason's family members, Genetic counselors, Insurance companies, Employers, Alzheimer's researchers, Bioethicists
Dilemma 6
Rural Hospital Triage
County General Hospital serves a rural area with 30,000 residents. The hospital has limited resources: 25 hospital beds, 4 ICU beds, 15 doctors.
An AI system costs $500,000 to implement and $100,000 annually to maintain. The AI would provide:
- Sepsis prediction (estimated 5 lives saved annually)
- Patient deterioration alerts
- Optimized bed management
However, the hospital also desperately needs:
- New MRI machine ($750,000) - the current one is 20 years old and breaks down frequently
- Two additional ICU beds ($300,000) - the hospital regularly turns patients away
- Upgraded emergency department ($400,000) - current equipment is outdated
The hospital's annual budget surplus is $600,000. They must prioritize.
A wealthy urban hospital 100 miles away has both the AI system AND all the other equipment. County General serves mostly low-income patients.
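Rough arithmetic can frame the trade-off (the five-year horizon below is an assumption; the other figures come from the scenario):

```python
# Cost per life saved by the AI over an assumed 5-year horizon
upfront, annual, lives_per_year = 500_000, 100_000, 5
years = 5                                # assumption, not from the scenario

total_cost = upfront + annual * years    # $1,000,000
lives_saved = lives_per_year * years     # 25
print(total_cost // lives_saved)         # $40,000 per life saved

# The competing purchases total $1.45M (MRI $750k + ICU beds $300k + ED
# upgrade $400k) against a $600k annual surplus, so any choice defers others.
```

By most benchmarks, $40,000 per life saved is very cost-effective, but the other equipment also saves lives in ways that are harder to count, which is what makes the prioritization hard.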
Discussion Questions:
- Should County General invest in AI or other equipment? Why?
- How do you weigh the 5 lives the AI might save each year against the broader benefits of the other equipment?
- Should rural hospitals be required to have the same AI technology as wealthy urban hospitals?
- Should the government provide funding to help rural hospitals adopt AI healthcare tools?
- Is it ethically acceptable for wealthy hospitals to have life-saving AI while poor hospitals cannot afford it?
Consider perspectives of: Hospital administrators, Rural community members, Patients who could benefit from AI, Doctors and nurses, State health department, Taxpayers, Urban hospital patients, Equipment manufacturers
Dilemma 7
The Black Box Decision
An AI system recommends that Thomas, a 55-year-old heart patient, should NOT receive a life-saving transplant. The AI analyzed his medical records and assigned him a "transplant success score" of 34 out of 100—below the threshold of 60 required for transplant approval.
Thomas's cardiologist reviews the case and believes Thomas is a good candidate who could live 15+ years with a new heart. However, the AI's recommendation carries significant weight because:
- The AI was trained on outcomes from 50,000 heart transplant patients
- The AI has proven 12% more accurate than human doctors at predicting transplant success
- Insurance companies increasingly use AI recommendations to approve/deny transplant coverage
- Only 4,000 donor hearts are available annually for the 50,000+ patients who need them
The Problem: The AI system is a "black box"—it cannot explain why it gave Thomas a low score. The cardiologist doesn't know what factors the AI weighted most heavily or whether they're appropriate.
Possible unknown factors influencing AI: Age? Zip code (as proxy for socioeconomic status)? Race? Minor health issues Thomas's doctor considers irrelevant? Training data bias?
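The zip-code item in that list deserves unpacking: excluding a sensitive attribute from a model's inputs does not remove its influence if another input encodes it. A hypothetical sketch (records invented) of a proxy feature at work:

```python
# Hypothetical proxy-feature sketch: the model never sees "group", yet zip
# code alone reconstructs the grouping exactly (records are invented).
records = [
    {"zip": "10001", "group": "A", "score": 72},
    {"zip": "10001", "group": "A", "score": 68},
    {"zip": "10456", "group": "B", "score": 35},
    {"zip": "10456", "group": "B", "score": 31},
]

by_zip = {}
for r in records:
    by_zip.setdefault(r["zip"], []).append(r["score"])

for z, scores in by_zip.items():
    print(z, sum(scores) / len(scores))   # 10001 → 70.0, 10456 → 33.0
# An opaque model trained with zip as a feature can silently penalize group B
# even if race and income never appear in its inputs.
```

This is one reason explainability matters in Thomas's case: without it, no one can check whether the low score rests on factors like these.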
Discussion Questions:
- Should Thomas be denied the transplant based on the AI's recommendation?
- Does it matter that the AI can't explain its reasoning?
- Should AI systems be required to provide explainable reasoning before being used for life-or-death decisions?
- When AI and human expert disagree, who should have final say in life-or-death decisions?
- Given limited donor hearts, should the most accurate predictor (AI) determine who gets transplants?
Consider perspectives of: Thomas (patient), Thomas's family, Cardiologist, Other patients on transplant waiting list, Organ donation organization, Insurance company, AI developers, Medical ethicists, Donor families
Dilemma 8
Surveillance for Safety
A nursing home implements an AI monitoring system to prevent falls and medical emergencies among elderly residents with dementia. The system includes:
- Cameras in all resident rooms (including bathrooms)
- Sensors in beds and floors
- Wearable devices tracking heart rate, movement, and location
The AI analyzes this data 24/7 to:
- Predict when a resident is about to fall (alerting staff to prevent injury)
- Detect when a resident has fallen and needs help
- Identify health deterioration (irregular heartbeat, unusual behavior patterns)
- Alert staff when a resident is in distress
Results: Falls reduced by 73%, emergency hospitalizations down 45%, staff can monitor 30 residents more effectively than before.
Concerns:
- Residents have dementia and cannot consent to constant surveillance
- Cameras capture residents in vulnerable moments (bathing, toileting, changing clothes)
- System tracks and records all movements and activities
- Data could potentially be hacked or misused
- Some family members uncomfortable with constant monitoring
- Residents who once wandered freely now cannot move about without triggering an AI alert
Discussion Questions:
- Does preventing injuries and saving lives justify constant surveillance?
- Should residents with dementia have the same privacy rights as cognitively intact individuals?
- Who should make the decision about surveillance—residents (who may not understand), family members, or nursing home?
- Are there ways to gain the safety benefits while respecting privacy (e.g., no bathroom cameras, motion sensors only)?
- Would you want a family member in a nursing home with this level of AI monitoring? Why or why not?
Consider perspectives of: Residents with dementia, Family members, Nursing home staff, Nursing home administrators, Privacy advocates, Healthcare regulators, Elderly rights organizations, Insurance companies