Beyond AI Compliance: Designing Integrity at Scale
Dr Alwin Tan, MBBS, FRACS, EMBA (University of Melbourne), AI in Healthcare (Harvard Medical School)
Senior Surgeon | Governance Leader | HealthTech Co-founder | Harvard Medical School — AI in Healthcare
Australian Institute of Company Directors — GAICD candidate
University of Oxford — Sustainable Enterprise
Institute for Systems Integrity (ISI)
Abstract
Healthcare AI governance has entered a new phase.
Standards exist.
Guidance is published.
Institutional oversight structures are in place.
This marks a transition from whether AI should be governed to whether governance designs can hold under pressure.
Observed system pattern
Across healthcare systems, a consistent pattern is emerging:
- Oversight responsibility is decentralised
- Assurance capacity varies widely
- Learning remains institution-bound
- Accountability diffuses as complexity increases
The outcome is formal compliance with uneven integrity.
Failure mode: responsibility fragmentation
When AI governance relies primarily on local institutional processes:
- Evaluation effort is duplicated
- Safety thresholds drift
- Bias detection becomes inconsistent
- Incident learning is non-portable
Over time, this produces:
- Geographic variation in patient safety
- Resilience narratives masking accumulated risk
- Inequity as a system outcome
This pattern is predictable.
It reflects governance architecture, not individual failure.
Integrity requirement
Integrity at scale requires more than compliance.
It requires systems that:
1. Share assurance infrastructure
Baseline AI evaluation, safety metrics, and bias signals must be designed as collective assets, not isolated efforts (a sketch of such a shared record follows this list).
2. Preserve pause under pressure
Governance systems must retain the ability to slow decisions without penalty, even when operational pressure is high.
3. Enable pattern-level oversight
Boards and regulators must govern:
- drift indicators
- near-miss density
- signal suppression
- accountability inversion
Incident review alone is insufficient.
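As a concrete illustration of points 2 and 3, the sketch below shows how near-miss density and a drift indicator might be computed from incident reports and combined into a pause trigger that does not wait for confirmed harm. This is a minimal sketch under stated assumptions: the data fields, thresholds, and the should_pause rule are illustrative, not values drawn from any published standard.

```python
# Illustrative sketch only: field names, thresholds, and the pause rule are
# assumptions for discussion, not values from any published standard.
from dataclasses import dataclass
from typing import List


@dataclass
class IncidentReport:
    model_id: str
    near_miss: bool                # caught before reaching a patient
    performance_deviation: float   # observed gap from expected model performance


def near_miss_density(reports: List[IncidentReport], uses: int) -> float:
    """Near misses per 1,000 uses; rising density is a signal even before harm occurs."""
    misses = sum(1 for r in reports if r.near_miss)
    return 1000 * misses / max(uses, 1)


def drift_indicator(reports: List[IncidentReport]) -> float:
    """Mean deviation from expected performance across recent reports."""
    if not reports:
        return 0.0
    return sum(r.performance_deviation for r in reports) / len(reports)


def should_pause(reports: List[IncidentReport], uses: int,
                 density_limit: float = 2.0, drift_limit: float = 0.05) -> bool:
    """Recommend pausing deployment when either indicator crosses its assumed threshold."""
    return (near_miss_density(reports, uses) > density_limit
            or drift_indicator(reports) > drift_limit)
```

Even a sketch this small makes the governance point: the pause condition is defined in advance, so slowing a deployment under pressure is a rule being followed, not a judgment to be defended.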
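Point 1 can be sketched in the same spirit. The structure below is one assumption about what a shared assurance record might contain (model identifier, contributing institution, baseline safety metrics, and subgroup bias signals), together with a pooled view across institutions; it is not an existing schema or standard.

```python
# Illustrative sketch only: the record structure and metric names are assumptions
# about what a shared assurance asset might contain, not an existing schema.
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List


@dataclass
class AssuranceRecord:
    model_id: str                     # which AI system was evaluated
    institution: str                  # who contributed the evaluation
    safety_metrics: Dict[str, float]  # e.g. {"sensitivity": 0.91, "false_alert_rate": 0.04}
    bias_signals: Dict[str, float]    # e.g. subgroup performance gaps by cohort


def pooled_view(records: List[AssuranceRecord], model_id: str) -> Dict[str, float]:
    """Average each safety metric across every institution that evaluated the model."""
    relevant = [r for r in records if r.model_id == model_id]
    metric_names = {name for r in relevant for name in r.safety_metrics}
    return {
        name: mean(r.safety_metrics[name] for r in relevant if name in r.safety_metrics)
        for name in metric_names
    }
```

The value is not the code; it is that every institution reports into the same structure, so variation across settings becomes visible rather than hidden inside local processes.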
Equity as a system signal
When assurance capacity depends on institutional resources:
- Access to safe AI becomes uneven
- Risk concentrates in lower-resourced settings
- Inequity emerges by design
Equity, therefore, functions as an integrity indicator, not an external value.
ISI position
Compliance frameworks are necessary.
They are not sufficient.
Integrity requires governance systems that:
- distribute learning faster than harm
- surface weak signals early
- remain governable under stress
The challenge ahead is not to add more rules.
It is to design systems that retain responsibility as complexity scales.
Canonical close
Systems fail not when guidance is absent,
but when responsibility fragments faster than oversight can adapt.
References
Palmieri, S., Robertson, C.T. and Cohen, I.G. (2026) ‘New Guidance on Responsible Use of AI’, JAMA, 335(3), pp. 207–208. doi:10.1001/jama.2025.23059.
Cohen, I.G. (2026) ‘AI is speeding into healthcare. Who should regulate it?’, Harvard Gazette, 12 January. Available at: https://news.harvard.edu/gazette/story/2026/01/ai-is-speeding-into-healthcare-who-should-regulate-it/
Joint Commission and Coalition for Health AI (2025) Guidance on Responsible Use of AI in Healthcare, 17 September. Available at: https://www.jointcommission.org/en-us/knowledge-library/news/2025-09-jc-and-chai-release-initial-guidance-to-support-responsible-ai-adoption