The Strategy Governance System for AI in Healthcare: Why boards must govern decision quality — not just approve technology
AI in healthcare is not a technology problem — it is a governance system challenge. This paper introduces the Strategy Governance System, showing why boards must move beyond approval to govern decision quality, signal integrity, and adaptive oversight.
Dr Alwin Tan, MBBS, FRACS, EMBA (Melbourne Business School)
Senior Surgeon | Governance Leader | HealthTech Co-founder
Harvard Medical School — AI in Healthcare
Australian Institute of Company Directors — GAICD candidate
University of Oxford — Sustainable Enterprise
Institute for Systems Integrity
1. Introduction: AI is not a technology problem
Artificial intelligence is often framed as a technological advancement in healthcare.
Faster processing.
Improved prediction.
Enhanced efficiency.
However, across industries and jurisdictions, a consistent pattern is emerging:
Failures associated with AI are rarely technological failures.
They are failures of governance.
In healthcare, this distinction is critical.
AI systems do not operate in isolation.
They are embedded within:
- clinical workflows
- organisational structures
- cultural environments
- decision-making processes
As a result:
The safety of AI is determined not by the model itself, but by the system governing its use.
This shifts the board’s role fundamentally.
Boards are not overseeing a tool.
They are governing a system of decisions under uncertainty.
2. From technology oversight to system governance
Traditional governance approaches treat AI as:
- a procurement decision
- a compliance issue
- a technical implementation
This framing is insufficient.
Leading frameworks emphasise that AI risk must be managed across its entire lifecycle, including design, deployment, monitoring, and adaptation (NIST, 2023; FDA, 2021).
Accordingly:
AI governance must be understood as a continuous system — not a one-time approval event.
Within this system, boards are responsible for:
- defining acceptable risk boundaries
- ensuring decision quality
- preserving signal integrity
- enabling timely intervention
3. The Strategy Governance System
AI implementation in healthcare can be understood as a structured governance pathway:
Strategic Choices → Input → Development → Approval → Readiness → Execution → Monitoring → Adaptation
Each stage represents a distinct control point.
Failure at any stage propagates downstream.
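To make the propagation logic concrete, the pathway can be read as a chain of gates: a stage that fails should halt progression rather than pass its weakness downstream. The sketch below is purely illustrative; the Stage enum and run_pathway function are hypothetical names, not part of any established framework.

```python
from enum import Enum, auto

class Stage(Enum):
    # The eight control points of the governance pathway, in order.
    STRATEGIC_CHOICES = auto()
    INPUT = auto()
    DEVELOPMENT = auto()
    APPROVAL = auto()
    READINESS = auto()
    EXECUTION = auto()
    MONITORING = auto()
    ADAPTATION = auto()

def run_pathway(gate_checks):
    """Walk the control points in order. A gate that fails halts
    progression rather than passing an unresolved weakness downstream."""
    for stage in Stage:
        if not gate_checks[stage]():
            return stage            # first control point that failed
    return None                     # every gate passed
```

In practice the "gates" are board processes, not code; the point is simply that each stage must be satisfied before the next inherits its output.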
3.1 Strategic Choices — defining intent
Boards must ensure clarity regarding:
- the clinical or operational problem being addressed
- the value the AI system is expected to create
- the trade-offs being accepted
Ambiguity at this stage leads to:
- misaligned objectives
- inappropriate deployment
- unclear success criteria
3.2 Input — shaping decision quality
Input determines what enters the decision process.
High-quality input requires:
- external perspective
- challenge of assumptions
- scenario testing
Weak input leads to:
- overreliance on vendor claims
- untested assumptions
- narrow framing of risk
3.3 Development — constructing the strategy
At this stage, management translates intent into a structured plan.
Boards must ensure:
- logical coherence
- alignment between objectives and capabilities
- visibility of trade-offs
A common failure is:
Approval of a well-presented plan that lacks operational integrity.
3.4 Approval — acceptance of risk
Approval is frequently misunderstood.
It is not endorsement of a document.
It is acceptance of the risks embedded within the strategy.
Boards must assess:
- feasibility
- appropriateness
- sustainability
- accountability
This aligns with established governance expectations for informed oversight and due diligence (AICD, 2019).
3.5 Readiness — capacity and capability
Readiness is a critical but often under-assessed stage.
It consists of two dimensions:
Capacity (“ready and willing”)
- workforce availability
- financial resources
- data infrastructure
- organisational alignment
Capability (“able”)
- clinical integration
- operational workflows
- digital maturity
- regulatory compliance
A consistent finding across sectors is that:
Strategies fail when ambition exceeds capacity and capability.
3.6 Execution — interaction with reality
Execution is where strategy encounters the complexity of real-world systems.
In healthcare, this includes:
- variability in patient presentation
- workforce constraints
- competing operational pressures
Failure at this stage often manifests as:
- workarounds
- inconsistent use
- degradation of intended benefit
3.7 Monitoring — detection of deviation
Monitoring is not synonymous with reporting.
Effective monitoring requires:
- leading indicators
- visibility of near misses
- detection of weak signals
Lagging indicators, such as adverse events, register failure only after harm has occurred.
Research consistently shows that organisations often possess early warning signals but fail to act on them (Power, 2009).
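As a minimal sketch of what a leading indicator could look like in practice, the example below tracks the rate at which clinicians override an AI recommendation across a rolling window; a rising override rate is a plausible weak signal of degrading fit long before adverse events appear. The class name, window size, and 0.15 threshold are assumptions for illustration, not validated values.

```python
from collections import deque

class OverrideRateMonitor:
    """Rolling leading indicator: fraction of recent AI recommendations
    that clinicians overrode. All parameters are illustrative assumptions."""

    def __init__(self, window=200, threshold=0.15):
        self.events = deque(maxlen=window)   # True = clinician overrode
        self.threshold = threshold           # assumed escalation trigger

    def record(self, overridden: bool):
        self.events.append(overridden)

    def weak_signal(self) -> bool:
        # Only evaluate once the window holds enough events to be meaningful.
        if len(self.events) < self.events.maxlen:
            return False
        return sum(self.events) / len(self.events) > self.threshold

# Usage: feed each decision as it happens and surface the signal early.
monitor = OverrideRateMonitor()
monitor.record(True)   # clinician rejected the AI recommendation
if monitor.weak_signal():
    print("Escalate: override rate exceeds agreed threshold")
```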
3.8 Adaptation — maintaining system integrity
AI systems evolve.
Clinical environments change.
Data distributions shift.
Boards must ensure:
- mechanisms for recalibration
- clear escalation pathways
- authority to pause or withdraw systems
Without adaptation:
Systems drift beyond their safe operating limits.
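One concrete, widely used way to detect the distribution shift described above is the Population Stability Index (PSI), which compares live inputs against the validation-era baseline. The sketch below is a generic implementation, not drawn from any specific product; the 0.2 trigger is a common heuristic, and the escalation step is a placeholder for the board-mandated pathway.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a validation-era baseline and recent live inputs.
    Values above roughly 0.2 are conventionally treated as material drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, flooring at a tiny value to avoid log(0).
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Usage: check a key input feature (e.g. patient age) each reporting cycle.
rng = np.random.default_rng(0)
baseline = rng.normal(60, 12, 5_000)        # distribution at validation
current = rng.normal(66, 12, 1_000)         # shifted live population
if population_stability_index(baseline, current) > 0.2:  # assumed trigger
    print("Escalate: input drift beyond safe operating limits; review or pause")
```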
4. Failure patterns in AI strategy governance
Across industries, failure patterns are consistent.
They include:
- unclear problem definition
- overestimation of capability
- underestimation of system complexity
- inadequate monitoring
- suppression of negative signals
In healthcare, these failures are amplified by:
- patient vulnerability
- clinical consequences of error
- complexity of care delivery
Importantly:
Organisations rarely fail due to absence of information.
They fail due to inability to interpret and act on it in time.
5. Risk appetite as a system constraint
Risk appetite must be embedded across all stages of the governance system.
It defines:
- acceptable error thresholds
- tolerance for uncertainty
- escalation triggers
In healthcare, this is inherently linked to tolerance for patient harm.
Guidance from global institutions emphasises that safety, accountability, and human oversight must remain central to AI deployment (WHO, 2021).
If risk appetite is:
- undefined → decisions become inconsistent
- unembedded → behaviour diverges from intent
- unmonitored → drift occurs unnoticed
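A practical way to keep appetite defined, embedded, and monitored is to express it as an explicit, versioned artefact that monitoring processes can evaluate directly, so behaviour cannot silently diverge from stated intent. The sketch below is one hypothetical encoding; every field name and threshold is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskAppetite:
    """Board-approved appetite as an explicit, machine-checkable artefact.
    All fields and values here are illustrative assumptions."""
    max_missed_case_rate: float = 0.02   # acceptable error threshold
    max_override_rate: float = 0.15      # tolerance for clinician distrust
    max_input_psi: float = 0.20          # drift level that forces review

    def breaches(self, missed_rate, override_rate, psi):
        """Return the escalation triggers breached by current readings."""
        readings = {
            "missed_case_rate": (missed_rate, self.max_missed_case_rate),
            "override_rate": (override_rate, self.max_override_rate),
            "input_psi": (psi, self.max_input_psi),
        }
        return [name for name, (value, limit) in readings.items() if value > limit]

appetite = RiskAppetite()
triggered = appetite.breaches(missed_rate=0.03, override_rate=0.10, psi=0.25)
if triggered:
    print(f"Escalate to board committee: {triggered}")
```

Whether expressed in code or in policy, the effect is the same: appetite that is written down and checked routinely cannot quietly drift from intent.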
6. The role of culture in AI governance
Culture determines:
- whether concerns are raised
- whether signals are preserved
- whether escalation occurs
Psychological safety has been identified as a key factor in enabling organisations to detect and respond to risk (Edmondson, 2018).
In the context of AI:
- complexity increases uncertainty
- opacity increases reliance on trust
- pressure increases likelihood of silence
Accordingly:
Culture is not a soft factor.
It is a control mechanism for risk visibility.
7. Implications for boards
Boards must transition from approving technology to governing system integrity.
This requires:
- continuous engagement across the strategy lifecycle
- interrogation of assumptions
- demand for unfiltered information
- focus on leading indicators
Critically:
Governance effectiveness is determined not by the quality of reports, but by the quality of signals received and acted upon.
8. Conclusion
AI in healthcare introduces significant opportunity.
It also introduces new forms of risk.
These risks are not primarily technical.
They are systemic.
The safety and effectiveness of AI are determined by the integrity of the governance system surrounding it.
Boards that focus solely on approval and compliance will remain exposed.
Boards that govern:
- decision quality
- signal integrity
- system adaptability
will be positioned to manage both opportunity and risk.
🔻 Final statement
AI does not make healthcare safer by default.
It makes governance more consequential.
📚 Harvard References
Australian Institute of Company Directors (AICD) 2019, Director Tools: Risk Oversight and Governance, AICD, Sydney.
Australian Commission on Safety and Quality in Health Care (ACSQHC) 2025, AI Clinical Use Guide – Guidance for clinicians, ACSQHC, Sydney.
Edmondson, A.C. 2018, The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth, Wiley, Hoboken.
Food and Drug Administration (FDA) 2021, Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device Action Plan, FDA, Silver Spring.
National Institute of Standards and Technology (NIST) 2023, Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1, U.S. Department of Commerce, Gaithersburg.
Power, M. 2009, 'The risk management of nothing', Accounting, Organizations and Society, vol. 34, no. 6–7, pp. 849–855.
World Health Organization (WHO) 2021, Ethics and governance of artificial intelligence for health, WHO, Geneva.
World Health Organization (WHO) 2025, Ethics and governance of artificial intelligence for health: guidance on large multi-modal models, WHO, Geneva.