The Approval Illusion: Why Boards Must Govern AI as a Living Clinical Risk System — and Why Vendors Must Share the Burden of Harm
AI in healthcare is not a product to approve but a system to govern. This ISI paper examines the failure of “approve and go” models and argues that boards must oversee decision quality and require vendors to share clinical risk alongside clinicians.
Dr Alwin Tan, MBBS, FRACS, EMBA (Melbourne Business School)
Senior Surgeon | Governance Leader | HealthTech Co-founder
Harvard Medical School — AI in Healthcare
Australian Institute of Company Directors — GAICD candidate
University of Oxford — Sustainable Enterprise
Institute for Systems Integrity (ISI)
Governance. Integrity. Systems.
Abstract
Healthcare AI is increasingly deployed under governance models designed for static technologies. This creates a structural failure: boards approve AI systems as if they were capital equipment, while clinicians absorb the downstream clinical, legal, and reputational risk when those systems fail. This paper argues that AI in healthcare is not a product but a dynamic clinical risk system requiring continuous oversight across its lifecycle. It further contends that current accountability models are misaligned, and that AI vendors must share in medical risk proportionate to their influence on clinical decision-making. Without this shift, organisations face a widening gap between decision authority and risk exposure, undermining patient safety, clinician trust, and governance integrity.
1. The Governance Error: Treating AI as a Static Asset
Boards are accustomed to approving technologies that are:
- stable in function
- predictable in performance
- governed through maintenance and calibration
This model does not hold for AI.
AI systems are:
- context-dependent
- data-sensitive
- update-driven
- vulnerable to drift and degradation post-deployment
Regulators now recognise this. The U.S. Food and Drug Administration (FDA) and Australia's Therapeutic Goods Administration (TGA) both frame AI oversight as a total product lifecycle responsibility, requiring continuous monitoring, performance evaluation, and post-market surveillance (FDA 2025; TGA 2026).
Implication for boards:
Approval is not a control point.
It is the beginning of exposure.
2. AI as a Living Clinical Risk System
Unlike traditional devices, AI performance is contingent on:
- local data distributions
- workflow integration
- clinician interaction patterns
- environmental and population changes
The literature on dataset shift demonstrates that models can degrade when real-world conditions diverge from training environments (Finlayson et al. 2021). More recent work highlights the need for continuous recalibration, monitoring, and governance structures to detect drift and unintended consequences (Lee et al. 2026; You et al. 2025).
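To make "continuous monitoring" concrete, the minimal sketch below screens a deployed model's risk scores against their training-time baseline using the Population Stability Index (PSI), one common drift statistic. The bin count, the 0.2 escalation threshold, and the synthetic data are illustrative assumptions, not a vendor or regulatory standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               observed: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a live distribution against the training-time baseline.
    Higher PSI means greater shift; >0.2 is a common rule of thumb."""
    # Fix bin edges from the baseline so both samples share one scale;
    # live values outside that range are not counted, which is
    # acceptable for a rough screen.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    obs_counts, _ = np.histogram(observed, bins=edges)
    eps = 1e-6  # avoids log(0) and division by zero in empty bins
    exp_frac = exp_counts / exp_counts.sum() + eps
    obs_frac = obs_counts / obs_counts.sum() + eps
    return float(np.sum((obs_frac - exp_frac) * np.log(obs_frac / exp_frac)))

# Synthetic illustration: post-deployment scores have drifted relative
# to those seen at validation time.
baseline = np.random.default_rng(0).normal(0.0, 1.0, 5000)  # training-era scores
live = np.random.default_rng(1).normal(0.4, 1.2, 5000)      # live scores
psi = population_stability_index(baseline, live)
if psi > 0.2:  # escalation threshold set by governance, not hard-coded in practice
    print(f"PSI = {psi:.3f}: material shift detected, escalate for clinical review")
```

A statistic like this is a trigger for human review, not a verdict; the governance question is who receives the alert and with what authority.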
ISI Position:
AI should be governed as a living system embedded within clinical decision pathways, not as a standalone tool.
3. The Accountability Gap: Power at the Top, Risk at the Edge
Current deployment models exhibit a consistent pattern:
| Function | Location |
|---|---|
| Approval | Board / Executive |
| Procurement | Organisation |
| Design | Vendor |
| Deployment | Digital / IT |
| Use | Clinician |
| Accountability (when harm occurs) | Clinician |
This creates a structural asymmetry:
Those who decide to deploy AI are not those who carry the consequences of its failure.
Legal scholarship confirms that liability frameworks for AI remain fragmented and unclear, often defaulting to clinician responsibility despite system-level influences (Mello & Guha 2024).
Empirical evidence suggests that clinicians may be judged more harshly in scenarios involving AI, even when they deviate from flawed recommendations (Bernstein et al. 2025).
Result:
A governance system that externalises risk downward while retaining decision authority upward.
4. Automation Bias and the Limits of “Human Oversight”
The assumption that clinicians provide a safety backstop is increasingly challenged.
Research on automation bias shows that users tend to:
- over-trust automated outputs
- reduce independent verification
- defer to system recommendations under pressure (Challen et al. 2019)
In high-intensity clinical environments, “human-in-the-loop” often becomes:
- time-constrained
- cognitively overloaded
- structurally dependent on the system
The World Health Organization (WHO) emphasises that human oversight must be meaningful, supported, and contextually appropriate, not merely symbolic (WHO 2021).
ISI Position:
Unchecked reliance on “human oversight” risks becoming a mechanism for liability transfer rather than a control function.
5. The Case for Vendor Risk Participation
AI vendors increasingly influence:
- diagnostic pathways
- treatment recommendations
- triage prioritisation
- clinical decision framing
Yet under most deployment models, they bear limited direct clinical risk once products are approved and deployed.
This creates a moral and governance hazard:
Influence without accountability.
In other high-risk industries, system designers and operators share responsibility for outcomes. Healthcare AI should be no different.
Principles for Vendor Risk Participation
- Shared Liability Frameworks: vendors should carry proportional liability where system outputs materially influence clinical decisions (one illustrative formalisation follows this list).
- Performance Accountability in Real-World Use: responsibility should extend beyond validation datasets to live clinical environments.
- Obligations for Monitoring and Drift Detection: vendors must actively support post-deployment surveillance and recalibration.
- Transparency of Model Limitations: clear articulation of boundaries, uncertainties, and failure modes.
- Contractual Alignment of Risk: procurement agreements should explicitly define risk-sharing mechanisms.
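One purely illustrative way to make "proportional liability" concrete is a simple attribution split. The weight and how it would be measured are assumptions offered for discussion, not a proposed legal standard:

$$
L_{\text{vendor}} = \alpha\, L_{\text{total}}, \qquad L_{\text{clinician/organisation}} = (1-\alpha)\, L_{\text{total}}, \qquad \alpha \in [0,1]
$$

Here $\alpha$ would reflect the audited influence of the system output on the specific decision, for example whether the recommendation was surfaced before or after the clinician's documented independent assessment.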
ISI Position:
Without vendor participation in clinical risk, the system incentivises deployment over safety.
6. What Boards Must Now Govern
Boards must shift from technology approval to system oversight.
Key governance questions include:
- How is model performance monitored in real time?
- What signals indicate drift, bias, or degradation?
- Who has authority to pause or withdraw the system?
- How are clinician concerns captured and escalated?
- What protections exist for clinicians exercising judgment?
- Where does legal and professional accountability sit?
- Do vendor contracts reflect their influence on outcomes?
This aligns with emerging frameworks that position AI governance as a continuous assurance process, not a compliance checkpoint (FDA 2025; IMDRF 2025).
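As a sketch of what "real-time monitoring" and "authority to pause" can look like operationally, the fragment below tracks rolling agreement between model output and adjudicated ground truth and escalates when performance falls below a pre-agreed floor. The class name, window size, accuracy floor, and escalation path are all assumptions; plain accuracy stands in for whatever clinically appropriate metrics (sensitivity by subgroup, calibration, alert burden) an organisation's assurance framework specifies.

```python
from collections import deque

class ModelAssuranceMonitor:
    """Rolling check of model output against adjudicated ground truth,
    with an explicit pause-and-escalate hook (all thresholds illustrative)."""

    def __init__(self, window: int = 500, min_accuracy: float = 0.85):
        self.outcomes = deque(maxlen=window)  # 1 = model matched ground truth
        self.min_accuracy = min_accuracy
        self.paused = False

    def record(self, prediction, ground_truth) -> None:
        """Log each adjudicated case and re-check the rolling window."""
        self.outcomes.append(int(prediction == ground_truth))
        if not self.paused and len(self.outcomes) == self.outcomes.maxlen:
            accuracy = sum(self.outcomes) / len(self.outcomes)
            if accuracy < self.min_accuracy:
                self.pause(f"rolling accuracy {accuracy:.1%} below "
                           f"floor {self.min_accuracy:.0%}")

    def pause(self, reason: str) -> None:
        """In practice a named governance role, not the vendor alone,
        would hold and exercise this authority."""
        self.paused = True
        print(f"MODEL PAUSED: {reason}; escalating to clinical governance committee")

# Toy usage: with a three-case window, the third adjudicated case trips the floor.
monitor = ModelAssuranceMonitor(window=3, min_accuracy=0.67)
for pred, truth in [(1, 1), (0, 1), (0, 1)]:
    monitor.record(pred, truth)
```

The design point is governance, not engineering: the pause authority, the floor, and the escalation path should be written into board-approved policy and vendor contracts before deployment.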
7. Reframing AI Governance: From Approval to Stewardship
The core shift required is conceptual:
| Old Model | New Model |
|---|---|
| Product approval | System stewardship |
| Static performance | Dynamic monitoring |
| Human as backup | Human as supported decision-maker |
| Vendor as supplier | Vendor as accountable partner |
| Compliance | Continuous assurance |
ISI Principle:
AI governance is not about approving technology.
It is about maintaining the integrity of decision-making under changing conditions.
Conclusion
Healthcare AI exposes a critical governance fault line:
- Decision authority is centralised
- Risk is decentralised
- Accountability is misaligned
If boards continue to treat AI as a procurement decision rather than a living risk system, failure will not be visible at the point of approval—but will emerge later, in practice, under pressure, and often at the expense of clinicians.
The future of safe AI in healthcare depends on one shift:
From approving systems
to governing their behaviour—
and from assigning blame
to sharing responsibility.
ISI Closing Line
AI does not fail like a machine.
It fails like a system under pressure.
And systems under pressure reveal the truth of governance.
References (Harvard style)
Bernstein, MH, et al. 2025, ‘Impact of AI on perceived legal liability’, NEJM AI.
Challen, R, Denny, J, Pitt, M, Gompels, L, Edwards, T & Tsaneva-Atanasova, K 2019, ‘Artificial intelligence, bias and clinical safety’, BMJ Quality & Safety, vol. 28, no. 3, pp. 231–237.
Finlayson, SG, Subbaswamy, A, Singh, K, et al. 2021, ‘The clinician and dataset shift in artificial intelligence’, New England Journal of Medicine, vol. 385, no. 3, pp. 283–286.
Food and Drug Administration 2025, Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, FDA.
International Medical Device Regulators Forum 2025, Good Machine Learning Practice for Medical Device Development, IMDRF.
Lee, JH, et al. 2026, ‘Diverging regulatory approaches to adaptive AI’, npj Digital Medicine.
Mello, MM & Guha, N 2024, ‘Understanding liability risk from using health care artificial intelligence’, New England Journal of Medicine.
Therapeutic Goods Administration 2026, Artificial intelligence (AI) and medical device software regulation, Australian Government.
World Health Organization 2021, Ethics and governance of artificial intelligence for health, WHO.
You, JG, et al. 2025, ‘Framework for real-world healthcare AI implementation’, npj Digital Medicine.