Publications Index by ISI

An overview of the Institute for Systems Integrity’s published work, highlighting foundational papers and thematic groupings.

This section brings together our published work — essays, position pieces, and research-informed commentary — focused on system integrity, governance, and decision-making in complex environments.
Our publications examine how systems behave under pressure: where incentives misalign, where accountability fragments, and where design choices shape real-world outcomes in health, technology, and public institutions.

© 2026 Institute for Systems Integrity. All rights reserved.
Content may be quoted or referenced with attribution.
Commercial reproduction requires written permission.

Editorial commitments:

Clarity over volume
We publish selectively, prioritising substance, evidence, and coherence over frequency.
Practice-informed insight
Our work is grounded in lived experience — connecting frontline realities with board-level, regulatory, and policy contexts.
Respect for complexity
We resist oversimplification. Many of the challenges facing institutions today cannot be reduced to slogans or single-factor explanations.

Foundations: Article #1

Anchor papers that establish the Institute’s core lenses and conceptual frameworks

Decision-Making Under System Stress: Why Good People Make Predictably Weaker Decisions — and What Integrity Requires
Why capable people make weaker decisions under institutional stress — and what integrity requires when systems are strained.

Latest Publication:

How Boards Set Good Culture — Including the Risks of Tribalism and Bias: Culture as a control system for truth, behaviour, and risk

Culture is often treated as a soft organisational concept. This article reframes culture as a governance control system — one that determines whether truth moves, risk is surfaced, and behaviour aligns with intent. It examines how boards shape culture through five governance levers, and why tribalism, hierarchy and bias can distort the signals boards rely on.

From Human-in-the-Loop to Human-with-Agency: Why AI Oversight Fails When Humans Are Present but Powerless

Artificial intelligence is rapidly entering healthcare, governance, operations, and critical infrastructure. Yet many organisations still mistake human presence for meaningful oversight. In this paper, the Institute for Systems Integrity introduces the Human Agency Framework — a governance model examining the difference between symbolic human involvement and protected human judgement under pressure. The paper argues that AI oversight fails not when humans are absent, but when they are present without the authority, information, or organisational support to intervene when it matters most.

Employee Assistance Programmes and the Illusion of Care: Why Underutilisation Is a System Signal, Not a Workforce Failure

Employee Assistance Programmes are often positioned as evidence of organisational care. Yet across industries, utilisation remains consistently low. This paper challenges the assumption that underuse reflects employee reluctance, arguing instead that it signals a deeper systems issue. When organisations address distress downstream—through support services—while leaving upstream drivers such as workload, culture, and leadership unchanged, a gap emerges between stated intent and lived experience. This paper reframes EAPs not as solutions, but as diagnostic indicators of system strain, and explores the governance implications of mistaking availability of support for presence of care.

The Capability Trap in Healthcare: Why Systems Get Worse While Everyone Works Harder

Healthcare systems rarely fail from a lack of effort. More often, they fail because effort is used to compensate for flawed system design. In The Capability Trap in Healthcare, the Institute for Systems Integrity (ISI) examines how organisations respond to performance pressure by increasing workload, targets, and urgency—while unintentionally eroding the very capability required to improve. This paper reframes the capability trap as a governance risk: a reinforcing loop where short-term performance gains come at the cost of long-term system strength, learning, and resilience. For boards and leaders, the challenge is not to drive more effort, but to recognise when effort has become a substitute for design—and to intervene before decline is locked in.

From Pitch Deck to Reality: Where Startup Systems Break When Growth Begins

Startups are often judged by the strength of their ideas. In practice, they succeed or fail based on the strength of their systems. This paper examines a critical transition point in venture development—the shift from narrative to execution—where assumptions are tested against operational reality. Drawing on governance, entrepreneurship, and real-world scaling dynamics, the Institute for Systems Integrity (ISI) explores how signal distortion, supply chain constraints, and incentive misalignment emerge as growth accelerates. The result is a clear insight: failure is rarely sudden. It is the cumulative consequence of systems that lose the ability to see and respond to truth. This paper introduces a practical framework for preserving system integrity as startups scale, ensuring that speed is matched with accountability, visibility, and disciplined execution.

When Choice Disappears: Why Healthcare Exposes the Limits of Market Logic

Healthcare is frequently debated as an ideological contest between markets and government, yet the real test comes when people are vulnerable, frightened, and in urgent need of care. In those moments, the normal conditions that make markets function—time, information, bargaining power, and freedom to choose—often disappear. When Choice Disappears: Why Healthcare Exposes the Limits of Market Logic examines why healthcare cannot be understood as an ordinary consumer market, why most serious nations build protective guardrails, and how resilient systems combine innovation, competition, access, and human dignity through smarter governance and design.

From Externalities to Systemic Risk: How Sustainability Entered the Logic of Finance

For decades, environmental and social harms were treated as externalities—costs absorbed by communities, ecosystems, and future generations rather than reflected in markets. That era is ending. Climate disruption, governance failures, supply-chain fragility, labour instability, and transition risk are increasingly entering financial systems through asset pricing, lending decisions, insurance markets, and board oversight. This paper examines one of the defining governance shifts of our time: how sustainability moved from the language of responsibility into the logic of finance.

Wearables Are Not Personal Devices: They Are Vulnerable Points Inside Critical Systems

Wearables are commonly viewed as personal devices — watches, trackers, glasses, and sensors used by individuals. But in connected healthcare and enterprise environments, they are something more consequential: vulnerable points inside larger systems. In this new paper, the Institute for Systems Integrity explores how always-on, human-attached devices create governance blind spots, behavioural intelligence risks, and new pathways of system exposure that most organisations have yet to properly recognise.

If the Right Clinician Is Not in the Room, Systems Drift: Clinical Signal, Governance Design, and the Risk of Functional Absence

Healthcare governance often assumes that clinical representation is enough. It isn’t. Many organisations have a clinician at the table, yet still make decisions detached from operational reality. The issue is not presence — it is whether the right clinician perspective is meaningfully shaping judgement, risk calibration, and strategic choices. When authentic clinical signal is absent, governance does not pause; it continues with weaker visibility, reduced challenge, and rising drift. This article examines how symbolic representation, flawed governance design, and functional clinical absence can quietly erode decision quality across healthcare systems.

The Approval Illusion: Why Boards Must Govern AI as a Living Clinical Risk System — and Why Vendors Must Share the Burden of Harm

Most boards believe that approving artificial intelligence is an act of governance. It is not. It is the point at which risk enters the system. In healthcare, AI does not behave like a static tool but as a dynamic, context-dependent influence on clinical decision-making — capable of drift, degradation, and unintended consequence under real-world conditions. Yet governance models remain anchored in procurement logic, while accountability for failure sits with clinicians at the point of care. This paper examines the “approval illusion” — the structural gap between decision authority and risk exposure — and argues that boards must shift from approving technology to governing decision quality, with vendors sharing responsibility for the clinical risks their systems create.

The Strategy Governance System for AI in Healthcare: Why boards must govern decision quality — not just approve technology

Artificial intelligence is rapidly reshaping healthcare, but its risks are not primarily technical — they are systemic. This paper introduces the Strategy Governance System for AI in Healthcare, reframing governance as a continuous process that extends beyond approval into decision quality, signal integrity, and adaptive oversight. It argues that boards do not manage AI risk by reviewing dashboards or endorsing strategy alone, but by governing how decisions are formed, tested, executed, and monitored over time. In doing so, it highlights a critical shift: from overseeing technology to safeguarding the integrity of the systems in which that technology operates.

Micromanagement as a Governance Failure Mode: Why control concentrates risk instead of reducing it

Micromanagement is often seen as a leadership problem. This paper reframes it as a governance failure mode. When decision-making concentrates at the top, organisations lose capability, slow down under pressure, and become structurally fragile. This ISI paper examines how control, when misapplied, undermines system integrity.

Walking the Floor as a Governance Mechanism

Most governance systems rely on what is reported—dashboards, metrics, and formal updates. But risk does not begin in reports. It begins in conditions: how work is performed, how pressure is managed, and whether concerns are raised or absorbed. This paper examines how “walking the floor” can be understood not as leadership visibility, but as a governance sensing mechanism—one that improves signal integrity, strengthens cultural oversight, and enables earlier detection of system stress before it becomes visible in formal reporting.

Absenteeism in Healthcare: From Workforce Symptom to System Signal

Absenteeism in healthcare is often viewed as a workforce issue requiring operational solutions. This paper reframes it as an early signal of system strain. When examined through a systems integrity lens, patterns of absence reveal deeper pressures in workload design, patient flow, organisational culture and leadership response. By shifting the focus from individual behaviour to system conditions, this analysis highlights why absenteeism matters not only for workforce sustainability, but for the integrity of clinical decision-making and the safety of care delivery.

Access Without Interpretation: Why Australia’s Digital Health Reform Risks Distorting Clinical Decision-Making

Healthcare systems are undergoing a quiet but profound shift. As patients gain faster access to pathology and imaging results — often before clinician review — the traditional flow of clinical decision-making is being reconfigured. What appears as a transparency reform is, in reality, a structural change in how information moves, is interpreted, and ultimately acted upon. This article examines why access alone is not enough, and why the next frontier of healthcare governance lies in managing how meaning is constructed between data and decision.

Constructive Scepticism as a Governance Control Function: Why boards must treat scepticism as a system requirement — not a personality trait

Constructive scepticism is widely described as a quality directors should bring to the boardroom. This paper reframes it as something more fundamental. It argues that scepticism is not simply a mindset, but a governance control function shaped by how information flows, how decisions are structured, and how oversight is exercised. When these conditions weaken, scepticism does not disappear — it becomes ineffective. Understanding this shift is critical to explaining why boards can remain compliant while gradually losing control under pressure.

When Work Never Settles: A Governance Blind Spot Hiding in Plain Sight

Work does not feel endless because of the hours. It feels endless because it never settles.
In this paper, the Institute for Systems Integrity examines how modern work systems — defined by constant interruption, fragmented attention, and blurred boundaries — are not just productivity challenges, but governance risks. When work cannot stabilise, judgement compresses, visibility weakens, and decision quality degrades. This is not a failure of individuals. It is a failure of system design. This paper reframes the issue through a governance lens, outlining how organisations can move from interruption-driven activity to systems that protect thinking, preserve judgement, and enable sustainable performance.

Shock-Resilient Entrepreneurship: A Systems Integrity Playbook for Small Business in an Era of Global Disruption

In an era defined by geopolitical instability, energy volatility, and cascading economic shocks, small businesses are increasingly operating on the edge of uncertainty. Shock-Resilient Entrepreneurship reframes crisis not as an isolated event, but as a systemic stress test—one that exposes hidden dependencies, weak signals, and fragile decision structures. This ISI playbook brings together evidence, strategy, and systems thinking to help entrepreneurs move beyond reactive survival, and instead build organisations that can absorb disruption, adapt with clarity, and sustain performance under pressure.

AI Managers vs People Managers: Governance Lessons from Human and Machine Failure Modes

As artificial intelligence shifts from experimentation into operational reality, organisations are confronting a new governance challenge: they are no longer managing only people, but also autonomous systems with fundamentally different behaviours and risks. This article examines why managing humans and managing AI require distinct control systems—and what boards must now oversee to ensure safety, reliability, and accountability.

Tone at the Top, Drift in the System: Why ethical drift begins when leadership signals are inconsistent, tolerated, or ignored

Most organisations don’t fail because of a single unethical decision—they drift. Tone at the Top, Drift in the System examines how culture is shaped not by stated values, but by the signals leaders send through what they reward, ignore, and tolerate. Drawing on governance research and real-world patterns, this article explores how small inconsistencies accumulate into systemic risk, and why boards must look beyond frameworks to the behaviours that are quietly allowed to continue.

When AI Writes the Discharge Summary: A Governance, Duty, and Systems Integrity Challenge

As generative AI tools move rapidly into clinical workflows, their use in discharge summaries and medication instructions is often framed as a productivity gain. Emerging evidence, however, suggests a more complex reality. While AI can produce outputs that are complete, fluent, and consistent, safety-critical risks — including hallucinations, incorrect instructions, and uneven performance across patient groups — persist. This article reframes AI-generated discharge communication not as a documentation tool, but as a governance and systems integrity challenge requiring board-level oversight, clear accountability, and robust control design.

Adding Value Through Ethical Leadership: Why Board Behaviour Shapes System Integrity

Ethics is often discussed in governance as culture, values, or compliance. But within complex organisations, ethical leadership functions as something more structural. Board behaviour shapes decision environments, influences how risks are surfaced, and determines whether integrity is reinforced or gradually eroded. In this article, the Institute for Systems Integrity examines why ethical discipline at the board level is not symbolic governance, but a core mechanism through which systems remain credible, resilient, and effective.

Carewashing: When “We Care” Becomes Organisational Self-Deception

Organisations increasingly speak the language of employee wellbeing. Leadership messaging emphasises that people matter, while wellbeing initiatives, resilience programs, and support services become more visible across workplaces. Yet many employees continue to experience chronic workload pressure, poorly managed organisational change, and inconsistent decision-making. This growing gap between organisational messaging and the lived experience of work is increasingly described as carewashing. This article examines how organisational expressions of care can unintentionally mask structural drivers of psychosocial risk and explores why genuine organisational care ultimately depends not on rhetoric, but on the design of work and the systems that protect people within it.

Water Governance in Healthcare Systems: A Planetary Boundary and Supply Chain Risk Analysis

Freshwater systems are a critical foundation of planetary stability, yet water governance rarely features in healthcare sustainability strategies. Healthcare depends heavily on reliable water for clinical care, sanitation, pharmaceuticals, and infrastructure, while global supply chains embed additional water risks. As climate change and over-extraction place increasing pressure on freshwater resources, healthcare systems face growing operational and systemic vulnerabilities. This paper examines water governance through the lens of planetary boundaries and supply chain risk, arguing that resilient health systems require stronger integration of freshwater risk into institutional governance.

AI as a Systems Stress Test 

Artificial intelligence is often discussed as a breakthrough technology capable of transforming healthcare and complex organisations. Yet as AI moves from experimentation to real-world deployment, a different reality is emerging. Rather than simply improving performance, AI frequently exposes deeper weaknesses in the systems it enters. In this paper, the Institute for Systems Integrity (ISI) examines why artificial intelligence often functions as a systems stress test—revealing hidden fragilities in data ecosystems, operational workflows, governance structures, and organisational readiness.

Low-Recoverability Plastics and the Governance Logic of Targeted Bans

Low-recoverability plastics represent a structural governance challenge rather than simply a waste-management issue. When products are designed with near-zero probability of recovery and high environmental leakage, downstream solutions such as recycling and clean-up become economically and operationally inadequate. In these cases, the core failure lies in product architecture and incentive design. Targeted bans, therefore, function not as symbolic environmental actions but as upstream governance instruments — correcting persistent design failures and preventing predictable harm before it enters circulation. From a systems integrity perspective, such measures reflect a transition from managing waste to managing risk at the level of design.

Diversity as an Integrity Mechanism in Board Decision Systems

Diversity is commonly framed as representation. In complex governance environments shaped by AI disruption, sustainability transition, and systemic risk, it performs a far more critical function. In Diversity as an Integrity Mechanism in Board Decision Systems, ISI reframes diversity as a structural stabiliser within board architecture — strengthening weak-signal detection, ethical contestability, and decision resilience under stress. Perspective breadth is not symbolic. It is protective.

Bed Block as a System Integrity Failure: Flow Breakdown at the Acute–Rehabilitation Boundary

Hospitals described as “full” are often signalling a deeper structural issue. This paper examines bed block not as a simple shortage of beds, but as a breakdown of flow integrity at the acute–rehabilitation boundary. Drawing on systems integrity frameworks and published health system evidence, it explores how capacity constraints, funding design, and fragmented accountability can combine under sustained stress to produce predictable congestion — even in well-intentioned systems.

Circularity Under Clinical Constraints: Why recycled material claims do not guarantee circular outcomes in healthcare

Healthcare increasingly adopts recycled-content materials in the name of sustainability. But recycled inputs do not guarantee circular outcomes. Clinical safety, contamination risks, regulation, and waste pathways often reshape what is truly recoverable. This paper examines the gap between circularity claims and system-level realities.

Digital Transition Risk: Why Non-Tech Boards Inherit Tech-Grade Exposure

Digital transformation is often framed as an operational or technological upgrade. This paper examines a less discussed reality: how digital dependency fundamentally reshapes enterprise risk. As organisations adopt cloud systems, electronic records, vendor-managed infrastructure, and AI-enabled tools, boards inherit technology-grade exposure irrespective of industry classification.


Mentoring as Infrastructure: Learning, Power, and Risk in Organisational Design

Mentoring is widely treated as goodwill.
In practice, it behaves like infrastructure.
When designed well, it accelerates learning and strengthens judgment. When left to intention alone, it can narrow thinking, create dependency, obscure power, and amplify risk. This paper reframes mentoring as a learning control system, outlining benefits, predictable failure modes, and the safeguards required to protect judgment, independence, and decision quality.


The Residual Risk Budget: Why “Net Zero” Still Requires Governance

Net zero is often described as a destination — emissions reduced, offsets applied, balance achieved. But this framing can obscure a critical governance reality. Even under credible net-zero pathways, residual emissions, residual harms, and residual uncertainties remain. They do not disappear; they are redistributed across systems, stakeholders, and time. This paper introduces the concept of the Residual Risk Budget — the remaining exposure that must be made visible, owned, and adaptively governed. Without this discipline, net zero risks becoming an accounting construct that masks ethical trade-offs and accelerates integrity drift.


Beyond Legality: Why Boards Must Ask “Should We?”

In contemporary governance, legality is often treated as the primary decision threshold. Yet many organisational failures arise not from illegal actions, but from decisions that were lawful, compliant, and ultimately indefensible. This ISI paper examines the critical distinction between “Can we?” and “Should we?”, arguing that resilient boards must govern beyond permission alone and embed integrity as a core decision discipline.


Governing Wicked Problems in Healthcare: An Integrity Architecture for AI, Sustainability, and Net Zero

Healthcare systems are entering a period of unprecedented complexity. Artificial intelligence, sustainability pressures, and net zero commitments are converging within institutions not originally designed to absorb this pace and scale of change. This paper argues that these challenges are best understood not as technical or compliance problems, but as wicked problems requiring a fundamentally different governance response.


Beyond AI Compliance: Designing Integrity at Scale 

This paper examines why most AI failures do not begin with flawed technology, but with governance systems that prioritise reassurance over judgment. As AI accelerates decision-making across complex organisations, traditional compliance frameworks struggle to detect drift, surface doubt, or correct harm before it becomes visible. This paper sets out a systems-level approach to AI governance—one that treats integrity as an architectural property, designed deliberately into authority, accountability, and the capacity to pause under pressure.

Author: Dr Alwin Tan, MBBS, FRACS, EMBA (University of Melbourne), AI in Healthcare (Harvard Medical School), GAICD candidate (Australian Institute of Company Directors), Sustainable Enterprise (University of Oxford). Senior Surgeon, Governance Leader, HealthTech Co-founder, Institute for Systems Integrity (ISI).

Governing AI in Healthcare: A Practical Integrity Architecture

This paper sets out why AI governance most often fails after deployment, not at approval. In real clinical environments, performance, safety, and accountability are shaped by workflow, staffing, training, and local judgment—not the model alone. This paper presents a practical integrity architecture for healthcare AI: designed to detect drift, preserve clinical judgment, and enable correction under operational pressure, before harm becomes visible to patients or boards.


The Systems Integrity Toolkit — Phase I
Why most integrity failures are not visible in time — and how systems allow harm to accumulate before anyone intervenes

Foundation Toolkit #1

This paper introduces the Systems Integrity Toolkit — Phase I, a governance architecture that consolidates ISI’s foundational research into a practical framework for identifying integrity risk before outcomes harden. It shows how system stress, decision degradation, governance mediation, and failure dynamics interact long before harm becomes visible.


What Systems Refuse to Change

Most systems don’t fail because they can’t see the problem.
They fail because they can’t change the things they’ve learned to protect.

As a companion to the Systems Integrity Toolkit — Phase I, this paper examines why integrity risks persist even after they become visible. It explores systemic refusal — the quiet protection of certain variables from change — and shows how governance under pressure can stabilise harm rather than correct it. Together, the Toolkit and this analysis describe a familiar condition in complex organisations: clarity without permission to change.

The Failure Taxonomy: How Harm Emerges Without Malice
Why most disasters are not caused by bad people — but by predictable system drift

Foundation Article #4

This paper introduces the Failure Taxonomy — a structural model showing how harm accumulates in complex systems through drift, signal loss, and accountability inversion, without anyone intending it.


The Pause Principle: Governance Failure Under Pressure

The ISI Pause Principle explains why governance fails when reaction replaces reflection. Under pressure, systems that remove space between signal and response degrade judgment, suppress warning signs, and invert accountability. Pause is not a leadership trait — it is a governance control condition.

Integrity Is a System Property: Why Outcomes Reflect Design, Not Intent

Foundation Article #3

Integrity is often treated as a personal trait. This paper shows why it is better understood as a system property — shaped by how authority, accountability, and information are aligned under stress, and why outcomes reflect design rather than intent.


When the Constitution Becomes a Weapon
How governance drift turns compliance into a liability under system stress

This paper examines how constitutions, delegations, and oversight structures can remain legally intact while drifting out of alignment with real decision-making, allowing compliance to persist even as governance control erodes.


Why Oversight Fails Under Pressure

How system stress distorts visibility, weakens governance, and produces predictable outcomes

Foundation Article #2

Governance mechanisms designed for stable conditions often lose sensitivity under sustained stress.
Signals distort. Drift normalises. Oversight becomes selectively blind.

This paper examines why failures emerge quietly — and why outcomes are best understood as properties of system design, not individual intent.



When Resilience Appears, Governance Has Already Failed
Why frontline heroics are a warning signal — not a success story

A companion paper to Why Oversight Fails Under Pressure examining how human resilience conceals system failure.


Themes

Foundation Papers

Browse by theme

In addition to our foundational work, this section includes:

Essays and perspective pieces
Governance and leadership reflections
Policy-relevant analysis
Research-informed commentary
Invited contributions from practitioners and scholars

Current publications appear below and will be updated as new work is released.
