
About the Project

This research project was developed as part of my Persuading Ethical UX Design course at the University of Washington, where I explored frameworks for designing technology that respects human values and prevents harm.

After studying ethical design principles and moral psychology throughout the course, I focused my final project on a critical real-world question: How can we predict and prevent moral outrage when implementing AI in healthcare?

I chose to apply Jonathan Haidt's Six Moral Foundations Theory, a psychological framework that explains why people have strong emotional reactions to ethical violations, to anticipate potential PR crises before AI healthcare products launch.

📥 Download my full document here

The Problem & My Research Process

The Problem

  • AI in healthcare risks public backlash when ethical concerns aren't addressed

  • Companies struggle to predict what will trigger moral outrage

  • Two critical scenarios: AI-generated health content & patient data usage

My Research Process 

  • Applied Haidt's Six Moral Foundations Theory to predict user reactions

  • Mapped each foundation to AI healthcare scenarios

  • Developed ethical UX framework with actionable design guidance

Six Moral Foundations: Risk Analysis

Care/Harm ❤️
  • Risk: Patient safety compromised
  • Mitigation: AI diagnoses need human review; security prevents breaches

Loyalty/Betrayal 🤝
  • Risk: Corporate interests over patients
  • Mitigation: Balance AI with human expertise; never sell data without consent

Sanctity/Degradation
  • Risk: Reducing patients to data
  • Mitigation: Consider emotions in advice; never automate life-or-death decisions

Fairness/Cheating ⚖️
  • Risk: Discriminatory decisions
  • Mitigation: Consistent advice across groups; diverse, audited training data

Authority/Subversion 👔
  • Risk: Bypassing regulations
  • Mitigation: Medical professionals approve content; strict regulatory compliance

Liberty/Oppression 🗽
  • Risk: Forced consent patterns
  • Mitigation: Clear opt-out options; opt-in data sharing only
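For readers who want to operationalize this analysis, the foundation-to-risk mapping can be expressed as a simple lookup table that a design-review script could query. This is a minimal, hypothetical Python sketch; `FOUNDATION_RISKS` and `review_feature` are illustrative names, not part of the project or any existing tool.

```python
# Illustrative sketch (not from the project itself): the six-foundation
# risk analysis encoded as a lookup table for design reviews.
FOUNDATION_RISKS = {
    "Care/Harm": {
        "risk": "Patient safety compromised",
        "mitigation": "AI diagnoses need human review; security prevents breaches",
    },
    "Loyalty/Betrayal": {
        "risk": "Corporate interests over patients",
        "mitigation": "Balance AI with human expertise; never sell data without consent",
    },
    "Sanctity/Degradation": {
        "risk": "Reducing patients to data",
        "mitigation": "Consider emotions in advice; never automate life-or-death decisions",
    },
    "Fairness/Cheating": {
        "risk": "Discriminatory decisions",
        "mitigation": "Consistent advice across groups; diverse, audited training data",
    },
    "Authority/Subversion": {
        "risk": "Bypassing regulations",
        "mitigation": "Medical professionals approve content; strict regulatory compliance",
    },
    "Liberty/Oppression": {
        "risk": "Forced consent patterns",
        "mitigation": "Clear opt-out options; opt-in data sharing only",
    },
}

def review_feature(foundations_touched):
    """Return the risks and mitigations a proposed feature must address."""
    return {name: FOUNDATION_RISKS[name]
            for name in foundations_touched
            if name in FOUNDATION_RISKS}
```

For example, a feature that stores patient records and auto-generates advice would be reviewed against `["Care/Harm", "Liberty/Oppression"]` and surface both mitigations.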

Ethical UX Framework: Key Deliverables

Pre-Launch Checklist
  • Does the feature prioritize patient safety?
  • Tested for bias across diverse users?
  • Transparent about data practices?
  • Medical oversight & regulatory approval?
  • Respects patient dignity & emotions?
  • Users can opt out easily?
Red Flag Indicators
  • No explanation for AI recommendations
  • Dark patterns in consent flows
  • Unclear accountability for errors
  • Data practices hidden in ToS
  • Different care quality by income
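The checklist and red-flag indicators above could also be wired into a release process as a simple automated gate. The following Python sketch is a hypothetical illustration under my own naming assumptions (`pre_launch_audit` and its return shape are invented for this example):

```python
# Hypothetical sketch: the pre-launch checklist and red-flag indicators
# as an automated release gate. All names are illustrative assumptions.
PRE_LAUNCH_CHECKLIST = [
    "Does the feature prioritize patient safety?",
    "Tested for bias across diverse users?",
    "Transparent about data practices?",
    "Medical oversight & regulatory approval?",
    "Respects patient dignity & emotions?",
    "Users can opt out easily?",
]

RED_FLAGS = [
    "No explanation for AI recommendations",
    "Dark patterns in consent flows",
    "Unclear accountability for errors",
    "Data practices hidden in ToS",
    "Different care quality by income",
]

def pre_launch_audit(answers, observed_flags):
    """Pass only if every checklist answer is yes and no red flag is present."""
    failed = [q for q, ok in zip(PRE_LAUNCH_CHECKLIST, answers) if not ok]
    flags = [f for f in observed_flags if f in RED_FLAGS]
    return {"ready": not failed and not flags,
            "failed_checks": failed,
            "red_flags": flags}
```

A team would answer each question during design review and list any indicators observed in the product; a single "no" or red flag blocks launch.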

View Full Document

Reflection & Next Steps

Key Insight: Moral outrage is predictable if we analyze ethical dimensions beforehand. By grounding design decisions in moral psychology, we can create AI systems that feel trustworthy and human-centered from day one.

Limitations: While theory provides predictive power, real-world validation is essential. Recommended next steps include user testing with actual patients, stakeholder interviews with healthcare providers, and cross-cultural analysis of moral foundations.

Keywords: UX Research, Ethical Design, Healthcare UX, AI/ML Ethics, Risk Analysis, Framework Development, Moral Psychology
