Governing AI in Healthcare: A Practical Integrity Architecture

AI governance does not fail at approval. It fails when drift, workload, and accountability pressures appear after deployment. This paper outlines a practical integrity architecture for governing AI in real clinical systems.


Dr Alwin Tan, MBBS, FRACS, EMBA (University of Melbourne), AI in Healthcare (Harvard Medical School)

Senior Surgeon | Governance Leader | HealthTech Co-founder | Harvard Medical School — AI in Healthcare
Australian Institute of Company Directors — GAICD candidate
University of Oxford — Sustainable Enterprise

Institute for Systems Integrity (ISI)

Healthcare AI governance is moving fast. Guidance exists. Committees are forming. Accreditation expectations are emerging. 

But governance fails most often after adoption—when performance drifts, workload rises, and accountability diffuses.

This follow-up sets out a practical approach that treats AI governance as an integrity system, not a compliance exercise.


1) Start with the governing question

The core question is not “Is the tool approved?”

It is:

Can this organisation detect and correct drift before harm becomes visible?

That is an integrity test.

Because in real clinical environments, model performance depends on implementation context—workflow, staffing, training, and local data. That “context dependence” is a key reason healthcare AI cannot be governed like static devices alone. 


2) Use a layered governance model

A workable healthcare AI governance design needs four layers, each doing what it is uniquely good at.

Layer A — Product regulation (baseline safety for what counts as a device)

Regulators define which AI systems are regulated medical devices and what evidence is required (e.g., FDA guidance for AI-enabled devices; Australia’s TGA guidance on AI and medical device software). 

Integrity principle: Regulatory approval is necessary, not sufficient. It does not guarantee safe performance at every site.

Layer B — Accreditation and standards (operational expectations)

Accreditation guidance (e.g., the Joint Commission and CHAI's Responsible Use of AI in Healthcare, RUAIH) sets expectations for governance, monitoring, reporting, transparency, and training.

Integrity principle: Accreditation helps normalise minimum practice—but if the burden is entirely local, capability becomes uneven.

Layer C — Shared assurance infrastructure (portable evidence, shared learning)

A key “missing middle” is shared assurance: evaluation methods, test datasets, bias assessment patterns, and reporting that can travel across sites.

This is the rationale behind proposals for a network of health AI assurance laboratories (Shah et al., 2024).

Integrity principle: When every hospital repeats the same evaluation in isolation, learning is slow and inequity becomes structural.

Layer D — Local clinical governance (workflow embedding and human oversight)

Local governance must own what cannot be centralised:

  • implementation and workflow integration
  • training and competency
  • “human override” pathways
  • escalation rules when signals appear
  • monitoring tied to local outcomes and populations

Australia’s AI Clinical Use Guide is a good example of practical “before/while/after use” thinking at the clinical layer. 

Integrity principle: Local governance is where real-world safety is made—or lost.


3) Define accountability so it cannot leak

Most AI failures become governance failures when accountability is blurred.

A practical design uses three explicit accountability objects:

(1) Model Owner (product accountability)

Owns: intended use, evidence base, known limitations, change pathway.

If the model is adaptive or updated, you need a defined “change plan” approach and controls (e.g., FDA’s Predetermined Change Control Plan guidance for AI-enabled devices). 

(2) Clinical Owner (use accountability)

Owns: where it is used, how it changes decisions, how clinicians are trained, what “safe use” looks like.

(3) System Owner (integrity accountability)

Owns: monitoring, bias surveillance, incident pathways, procurement gating, and board-visible reporting.

Integrity rule: If you can't name these owners for a tool, the tool isn't governed.
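
To make this rule operational, ownership can be recorded as data and checked automatically rather than asserted in policy documents. A minimal sketch in Python, where the record fields, role titles, and the example tool are illustrative assumptions rather than a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AIToolGovernanceRecord:
    """One record per deployed AI tool; every owner field is mandatory."""
    tool_name: str
    model_owner: str      # product accountability: intended use, evidence, change pathway
    clinical_owner: str   # use accountability: workflows, training, what "safe use" looks like
    system_owner: str     # integrity accountability: monitoring, incidents, board reporting

def is_governed(record: AIToolGovernanceRecord) -> bool:
    """Apply the integrity rule: a tool without all three named owners is not governed."""
    return all([record.model_owner, record.clinical_owner, record.system_owner])

# Illustrative example: a hypothetical sepsis-alert tool with all three owners named
record = AIToolGovernanceRecord(
    tool_name="sepsis-alert",
    model_owner="Vendor clinical product lead",
    clinical_owner="Director of Emergency Medicine",
    system_owner="Chief Medical Information Officer",
)
assert is_governed(record)
```

A registry in this form also gives the System Owner a single place to attach monitoring status, incidents, and change history for each tool.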


4) Govern the lifecycle, not the launch

Governance must be built around a lifecycle:

Before deployment

  • Scope: intended use, exclusions, decision rights
  • Evidence: what is known, what is uncertain
  • Risk tier: clinical risk drives monitoring intensity (see the cadence sketch below)
  • Equity: subgroup performance expectations are explicit
  • Implementation test: “does it work here?”

(Aligns with RUAIH's emphasis on local validation and monitoring.)
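
One way to make "clinical risk drives monitoring intensity" concrete before deployment is a tier-to-cadence mapping agreed in advance. A minimal sketch; the tier labels and review intervals are illustrative assumptions, not recommended values:

```python
# Illustrative mapping from clinical risk tier to monitoring cadence.
# Tier labels and intervals are assumptions for this sketch; real cadences
# would be set by local clinical governance and reviewed over time.
MONITORING_CADENCE_DAYS = {
    "high": 30,     # e.g. tools that directly influence treatment decisions
    "medium": 90,   # e.g. triage or prioritisation support
    "low": 180,     # e.g. documentation or back-office tools
}

def review_interval(risk_tier: str) -> int:
    """Return the agreed review interval in days for a given risk tier."""
    return MONITORING_CADENCE_DAYS[risk_tier]
```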

During use

  • Drift detection: performance over time (see the sketch after this list)
  • Near-miss capture: “bad outputs caught by humans” are leading indicators
  • Automation bias checks: where trust becomes uncritical
  • Operational load: does it reduce burden or create new work?
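
For the drift-detection item above, the simplest workable pattern is to compare a recent window of a performance metric against the pre-deployment baseline and escalate when the gap exceeds an agreed tolerance. A minimal sketch; the metric, window, and threshold are illustrative assumptions, and a real monitoring programme would add statistical testing and subgroup breakdowns:

```python
from statistics import mean

def drift_alert(baseline_scores: list[float],
                recent_scores: list[float],
                tolerance: float = 0.05) -> bool:
    """Flag drift when recent mean performance falls more than `tolerance`
    below the baseline mean. The threshold is illustrative; real limits
    would be risk-tiered and statistically tested."""
    return mean(recent_scores) < mean(baseline_scores) - tolerance

# Illustrative weekly AUROC estimates for a deployed model
baseline = [0.86, 0.85, 0.87, 0.86]
recent = [0.81, 0.80, 0.79, 0.82]
if drift_alert(baseline, recent):
    print("Escalate: performance drift exceeds tolerance")
```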

After change or update

  • Change control: what changed, why, and what the impact is
  • Re-validation: risk-tiered retesting and rollout
  • Incident learning: shareable patterns, not local blame

(Consistent with regulatory thinking about controlled iterative improvement.)


5) Make drift visible to boards

Boards cannot govern “AI” as a concept. They can govern signals.

A board-visible AI integrity pack should include:

  • Inventory: all AI tools in use (clinical + operational)
  • Risk tiering: where failure would harm patients
  • Monitoring status: what is monitored and how often
  • Equity signal: performance across key subgroups
  • Near-miss density: frequency of “caught before harm” events
  • Escalations: pauses, suspensions, overrides
  • Change log: updates and their validations

This turns AI from a novelty into a governable risk domain—aligned with “responsible use” guidance that emphasises governance, monitoring, and reporting. 
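
A sketch of how such a pack can be assembled as structured data, so the same fields roll up consistently across tools and over time; the field names, thresholds, and triage rule are illustrative assumptions, not a reporting standard:

```python
from dataclasses import dataclass, field

@dataclass
class BoardIntegrityPackEntry:
    """One row per AI tool in the board-visible integrity pack."""
    tool_name: str
    risk_tier: str                 # e.g. "high" where failure would harm patients
    monitoring_cadence: str        # what is monitored and how often
    equity_flag: bool              # subgroup performance gap detected
    near_misses_last_quarter: int  # "caught before harm" events
    escalations: list[str] = field(default_factory=list)  # pauses, suspensions, overrides
    changes_since_last_report: int = 0                    # updates and their validations

def needs_board_attention(entry: BoardIntegrityPackEntry) -> bool:
    """Simple triage rule (an assumption, not a standard): surface high-risk
    tools with equity gaps, escalations, or elevated near-miss counts."""
    return entry.risk_tier == "high" and (
        entry.equity_flag
        or bool(entry.escalations)
        or entry.near_misses_last_quarter > 5
    )

# Illustrative entry for the same hypothetical sepsis-alert tool
entry = BoardIntegrityPackEntry(
    tool_name="sepsis-alert",
    risk_tier="high",
    monitoring_cadence="monthly AUROC and subgroup review",
    equity_flag=True,
    near_misses_last_quarter=3,
)
assert needs_board_attention(entry)
```

Keeping the pack as data rather than narrative is what makes quarter-on-quarter comparison, and therefore drift, visible at board level.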


6) Use a common risk language

Most organisations struggle because they mix:

  • technical risk terms
  • clinical safety language
  • legal/compliance framing

A practical approach is to adopt an overarching risk management vocabulary such as NIST AI RMF (Govern–Map–Measure–Manage) for risk structure, paired with healthcare-specific guidance for clinical use. 

And for organisational governance maturity, an AI management system standard such as ISO/IEC 42001 can provide management-system scaffolding (policies, roles, audits, continual improvement). 
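
As a small illustration of what shared language can look like in practice, local governance activities can be tagged with the NIST AI RMF functions so that technical, clinical, and compliance teams file the same work under the same terms. The activity names below are illustrative assumptions:

```python
from enum import Enum

class RMFFunction(Enum):
    """The four core functions of NIST AI RMF 1.0, used as shared tags."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

# Illustrative mapping of local governance activities to RMF functions.
ACTIVITY_TAGS = {
    "ai_tool_inventory": RMFFunction.GOVERN,
    "intended_use_and_context_review": RMFFunction.MAP,
    "subgroup_performance_monitoring": RMFFunction.MEASURE,
    "drift_escalation_and_rollback": RMFFunction.MANAGE,
}
```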

Integrity principle: The point is not paperwork. The point is shared language that supports early detection and action.


7) The ISI governance position

A post-compliance AI governance approach should be built around a simple design commitment:

Safety, equity, and learning must scale together.

That requires:

  • layered governance (regulation + accreditation + shared assurance + local oversight)
  • explicit ownership (model / clinical / system)
  • lifecycle controls (before / during / after change)
  • board-visible drift signals
  • shared assurance infrastructure to avoid postcode-dependent integrity

The healthcare system does not need more statements about “responsible AI.”
It needs integrity architecture.


References

  • Cohen, I.G. (2026) ‘AI is speeding into healthcare. Who should regulate it?’, Harvard Gazette, 12 January. Available at: (Harvard Gazette page) 
  • Joint Commission and Coalition for Health AI (CHAI) (2025) ‘JC and CHAI release initial guidance to support responsible AI adoption’, 17 September. Available at: (Joint Commission news release) 
  • Joint Commission and Coalition for Health AI (CHAI) (2025) The Responsible Use of AI in Healthcare (RUAIH). Available at: (PDF) 
  • Palmieri, S., Robertson, C.T. and Cohen, I.G. (2026) ‘New Guidance on Responsible Use of AI’, JAMA, 335(3), pp. 207–208. 
  • Shah, N.H. et al. (2024) ‘A Nationwide Network of Health AI Assurance Laboratories’, JAMA, 331(3), pp. 245–249. 
  • U.S. Food and Drug Administration (FDA) (2025) Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions. Available at: (FDA guidance) 
  • National Institute of Standards and Technology (NIST) (2023) Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. Available at: (PDF) 
  • International Organization for Standardization (ISO) (2023) ISO/IEC 42001:2023 Artificial intelligence — Management system. Available at: (ISO standard page) 
  • Australian Commission on Safety and Quality in Health Care (2025) AI Clinical Use Guide – Guidance for clinicians (Version 1.0, August 2025). Available at: (Commission page/PDF) 
  • Therapeutic Goods Administration (TGA) (2025) 'Artificial Intelligence (AI) and medical device software', last updated 4 September 2025. Available at: (TGA page)