🏛️ When AI Writes the Discharge Summary: A Governance, Duty, and Systems Integrity Challenge

Generative AI can produce discharge summaries with impressive completeness and fluency. Yet emerging evidence shows persistent safety risks, reframing discharge automation as a governance and systems integrity challenge rather than a productivity solution.

Institute for Systems Integrity (ISI)

Dr Alwin Tan, MBBS, FRACS, EMBA (Melbourne Business School)

Senior Surgeon | Governance Leader | HealthTech Co-founder
Harvard Medical School — AI in Healthcare
Australian Institute of Company Directors — GAICD candidate
University of Oxford — Sustainable Enterprise

Healthcare organisations are rapidly adopting generative AI.

Among the most attractive early use cases:

• Discharge summaries
• Patient-centred instructions
• Medication communication

The promise is compelling:

👉 Faster documentation
👉 Reduced clinician burden
👉 Standardised outputs

But emerging evidence signals something deeper:

This is not merely a technology decision.
It is a governance decision.


🏥 Discharge Communication Is Safety-Critical Infrastructure

Discharge summaries are often treated as administrative artefacts.

In reality, they function as:

✔ Clinical handover mechanisms
✔ Medication safety controls
✔ Legal records
✔ Continuity-of-care bridges

Failures propagate across organisational boundaries:

• Primary care
• Community pharmacy
• Home care
• Patient self-management

The World Health Organization identifies transitions of care as one of the highest-risk moments for medication harm (WHO, 2019).

Errors here are not clerical.

They are systemic.


🤖 What the Evidence Signals

Recent evaluations of GPT-based systems reveal a consistent pattern.

✅ Strengths

• High medication inclusion rates
• Strong narrative structuring
• Improved readability
• Rapid draft generation

⚠️ Risks

• Hallucinations
• Incorrect instructions
• Safety issues despite completeness
• Higher error rates in complex patients

A recent study evaluating GPT-4o for patient-centred discharge medication instructions reported:

👉 95% completeness
👉 Potential safety issues in ~69% of cases

(Tang et al., 2026)

Earlier clinician-reviewed studies similarly identified non-trivial rates of potentially harmful issues, including hallucinated or incorrect content (Stanceski et al., 2024).


⚠️ Governance Insight #1

Completeness Metrics ≠ Safety Assurance

Boards must guard against metric substitution error: mistaking operational performance indicators for safety validation.

A discharge summary may be:

✔ Complete
✔ Fluent
✔ Structured

Yet still:

❌ Clinically unsafe
❌ Legally risky
❌ Reputationally damaging

Under established governance principles, directors are responsible for ensuring that risk management and safety systems are effective, not merely efficient (AICD, 2023).


⚠️ Governance Insight #2

Generative AI Introduces New Risk Classes

AI-generated discharge communication creates exposures beyond traditional documentation risk:

✔ Model hallucination risk
✔ Automation bias risk
✔ Accountability ambiguity
✔ Clinical safety risk
✔ Equity & bias risk

Technology capability without matching control systems increases organisational fragility.

As articulated in ISI’s prior work:

Integrity Protection Stack™
Performance does not eliminate risk. Controls contain it.

⚠️ Governance Insight #3

Automation Bias Alters Control Environments

Automation bias — the tendency to over-trust automated outputs — is well documented (Lyell & Coiera, 2017).

In discharge workflows this may lead to:

• Reduced verification vigilance
• Delayed error detection
• False confidence in AI-generated text

ISI’s Failure Taxonomy™ classifies this as:

👉 Cognitive Control Erosion Failure

A failure mode in which human oversight weakens because outputs appear reliable.


⚠️ Governance Insight #4

Risk Distribution Is Uneven

Emerging evidence suggests safety issues may increase for:

• Older patients
• Higher complexity patients

(Tang et al., 2026)

This introduces:

✔ Patient safety risk
✔ Ethical risk
✔ Health equity risk
✔ Regulatory scrutiny risk

Consistent with ISI’s systems integrity principle:

“Risk That Clusters Becomes Governance Risk.”

🏛️ The Director Duty Lens

Under Australian governance expectations, boards must ensure:

✔ Robust safety and quality systems
✔ Effective risk oversight
✔ Clear accountability structures
✔ Prudent technology adoption

Key board-level questions include:

1️⃣ Has safety validation preceded deployment?
2️⃣ How are hallucination risks detected and audited?
3️⃣ Are error rates analysed by patient subgroup?
4️⃣ Who holds accountability for AI-generated clinical communication?
5️⃣ Does human review remain a true control, or merely symbolic reassurance?

As governance doctrine consistently emphasises:

Directors govern consequences, not intentions.

(AICD, 2023)


🧭 The Systems Integrity Position

Generative AI at discharge is best framed as:

A safety-critical system augmentation

Not:

❌ A documentation shortcut
❌ A productivity tool
❌ A cost-reduction lever

Most defensible deployments position AI as:

✔ Drafting assistant
✔ Omission detector
✔ Consistency validator
✔ Variance flagger

Rather than:

❌ Autonomous clinical authority

Because in safety-critical systems:

Efficiency gains must never outpace integrity controls.


📚 References (Harvard Style)

Australian Institute of Company Directors (AICD) (2023) Director Tools: Risk Oversight & Governance. Sydney: AICD.

Lyell, D. and Coiera, E. (2017) ‘Automation bias and verification complexity: a systematic review’, Journal of the American Medical Informatics Association, 24(2), pp. 423–431.

Stanceski, K. et al. (2024) ‘The quality and safety of using generative AI to produce patient-centred discharge instructions’, npj Digital Medicine.

Tang, M. et al. (2026) ‘Assessing the safety of patient-centred discharge medication instructions generated by an AI model’, International Journal of Medical Informatics.

World Health Organization (2019) Medication Safety in Transitions of Care. Geneva: WHO.

Institute for Systems Integrity (ISI) (2025) Integrity Protection Stack™.

Institute for Systems Integrity (ISI) (2025) Failure Taxonomy™.