Identify one AI tool in use at your institution. Review the published validation data. Assess whether the patient population it was validated on matches yours.
AI Bias in Clinical Settings
AI tools trained on historical healthcare data inherit historical healthcare inequities. Tools trained predominantly on data from one demographic may perform worse for others — without any visible indication that performance is degraded.
Known examples: pulse oximeters overestimate blood oxygen saturation in patients with darker skin tones, and AI tools trained on their readings inherit that error. Dermatology AI tools were trained predominantly on images of lighter skin and show worse performance on darker skin. Sepsis prediction models trained at academic medical centers may underperform at community hospitals.
Ask for sub-group performance data before deploying any AI tool. "Our tool has 95% accuracy" is meaningless without knowing whose data it was measured on.
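The point above can be made concrete with a small sketch. This is hypothetical evaluation data, not any real tool's output format: an aggregate 95% accuracy can coexist with 50% accuracy in an under-represented subgroup.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy overall and per subgroup.

    `records` is a list of (subgroup, prediction, truth) tuples —
    a hypothetical evaluation export for illustration only.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, truth in records:
        for key in (group, "overall"):
            total[key] += 1
            if pred == truth:
                correct[key] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: group A is well represented and perfectly predicted;
# group B is small and the model gets only half of it right.
records = (
    [("A", 1, 1)] * 90 +
    [("B", 1, 1)] * 5 + [("B", 0, 1)] * 5
)
print(subgroup_accuracy(records))
# {'A': 1.0, 'overall': 0.95, 'B': 0.5}
```

The headline "95% accuracy" is true here, and it hides the subgroup where the tool fails half the time. This is exactly why stratified performance data matters.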
Hallucination in Clinical AI
Generative AI hallucinates — it produces confident-sounding output that is factually wrong. In a clinical context, this is dangerous. An AI that fabricates a drug interaction, invents a lab value, or misattributes a symptom to the wrong patient can cause patient harm.
The rule: never use generative AI output in clinical documentation without verifying clinical facts against the actual patient record. AI drafts the narrative — you verify the facts.
NEVER let AI generate clinical facts you haven't verified. AI can structure and write. You verify. Document it this way: "AI-generated draft reviewed and verified by [your name]."
HIPAA and AI Tools
Using patient data with AI tools requires HIPAA compliance. Consumer AI tools (free ChatGPT, free Claude) are NOT HIPAA compliant. Using patient information with non-compliant AI tools is a HIPAA violation.
Approved tools: tools with Business Associate Agreements (BAAs), or institution-approved AI tools with data governance. Enterprise versions of major AI products (Claude for Enterprise, ChatGPT Enterprise) have BAAs available.
Key Points
- Check with your institution's compliance or informatics team before using any AI tool with patient data
- De-identified information falls outside HIPAA and can be used with non-compliant tools, but true de-identification means removing all 18 HIPAA Safe Harbor identifiers (name, DOB, MRN, dates, geographic detail, and more), not just the obvious ones
- Your institution may have approved AI tools available — ask before finding workarounds
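As a minimal illustration of what stripping direct identifiers looks like, here is a regex-based redaction sketch. The patterns and placeholder names are assumptions for this example; a few regexes are nowhere near sufficient for real de-identification, which must cover all 18 Safe Harbor identifiers and should use institution-validated tooling.

```python
import re

# Illustrative only: covers three identifier types, not all 18
# required by the HIPAA Safe Harbor method.
PATTERNS = [
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Pt DOB 03/14/1962, MRN: 4471023, seen for follow-up."
print(redact(note))
# prints: Pt DOB [DATE], [MRN], seen for follow-up.
```

Even in this toy note, a free-text phrase like "the patient's daughter, Maria" would slip straight through, which is why pattern-matching alone is not a de-identification strategy.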
Using AI Responsibly in Clinical Practice
The principle: AI assists. You decide. Document AI use when it materially contributed to a clinical decision. Review AI output critically — the same way you review information from any source that isn't your own direct observation.
AI is not a colleague. It cannot be held accountable. You can. Build workflows that keep you in the loop on every clinical decision AI touches.
Key Points
- Use AI for drafting and efficiency — not for clinical reasoning you haven't done yourself
- Disclose AI assistance in your documentation per your institution's policy
- Report AI errors through your institution's safety reporting system — this is how the field improves
- Maintain clinical competence — using AI for documentation doesn't reduce your responsibility to know the clinical material
Day 4 Complete
- Understand AI bias and how to check for it in tools you use
- Know the hallucination risk in generative AI and how to mitigate it
- Understand HIPAA implications for AI tool use
- Have a framework for responsible AI use in clinical practice
Building AI Literacy in Your Department
Day 5 covers how to introduce AI responsibly — training your team, setting policies, and keeping clinicians in control.