All Our Publications by ISI

A curated index of ISI’s public work—foundational papers, frameworks, and institute updates on governance failure, system drift, accountability, and decision-making under stress.


All Publications

This page brings together the Institute for Systems Integrity’s published work — papers, frameworks, and updates examining how systems behave under pressure and how integrity erodes through governance and design.

We publish selectively, focusing on decision-making, accountability, and system drift in complex institutional environments.

© 2026 Institute for Systems Integrity. All rights reserved.
Content may be quoted or referenced with attribution.
Commercial reproduction requires written permission.

This Week's Publication

Constructive Scepticism as a Governance Control Function: Why boards must treat scepticism as a system requirement — not a personality trait

Constructive scepticism is widely described as a quality directors should bring to the boardroom. This paper reframes it as something more fundamental. It argues that scepticism is not simply a mindset, but a governance control function shaped by how information flows, how decisions are structured, and how oversight is exercised. When these conditions weaken, scepticism does not disappear — it becomes ineffective. Understanding this shift is critical to explaining why boards can remain compliant while gradually losing control under pressure.

Constructive Scepticism as a Governance Control Function | Institute for Systems Integrity | ISI
Constructive scepticism is not just a mindset. This ISI paper shows why boards must treat it as a governance control function embedded in information flow, decision-making, and oversight—not merely a personal trait.

When Work Never Settles: A Governance Blind Spot Hiding in Plain Sight

Work does not feel endless because of the hours. It feels endless because it never settles.
In this paper, the Institute for Systems Integrity examines how modern work systems — defined by constant interruption, fragmented attention, and blurred boundaries — are not just productivity challenges, but governance risks. When work cannot stabilise, judgment compresses, visibility weakens, and decision quality degrades. This is not a failure of individuals. It is a failure of system design. This paper reframes the issue through a governance lens, outlining how organisations can move from interruption-driven activity to systems that protect thinking, preserve judgment, and enable sustainable performance.

When Work Never Settles: A Governance Blind Spot Hiding in Plain Sight | ISI | Institute for Systems Integrity
Work doesn’t feel endless because of the hours. It feels endless because it never settles. This ISI paper reveals how interruption-driven systems compress judgment, distort visibility, and quietly degrade governance.

Shock-Resilient Entrepreneurship: A Systems Integrity Playbook for Small Business in an Era of Global Disruption

In an era defined by geopolitical instability, energy volatility, and cascading economic shocks, small businesses are increasingly operating on the edge of uncertainty. Shock-Resilient Entrepreneurship reframes crisis not as an isolated event, but as a systemic stress test—one that exposes hidden dependencies, weak signals, and fragile decision structures. This ISI playbook brings together evidence, strategy, and systems thinking to help entrepreneurs move beyond reactive survival, and instead build organisations that can absorb disruption, adapt with clarity, and sustain performance under pressure.

Shock-Resilient Entrepreneurship: A Playbook for Small Business | ISI | Institute for Systems Integrity
In an era of geopolitical instability and energy shocks, small businesses face cascading risks. This ISI playbook shows how to strengthen cash discipline, pricing, supply chains, and decision systems to navigate disruption with clarity and resilience.

AI Managers vs People Managers: Governance Lessons from Human and Machine Failure Modes

As artificial intelligence shifts from experimentation into operational reality, organisations are confronting a new governance challenge: they are no longer managing only people, but also autonomous systems with fundamentally different behaviours and risks. This article examines why managing humans and managing AI require distinct control systems—and what boards must now oversee to ensure safety, reliability, and accountability.

AI Managers vs People Managers: Governance Lessons for the AI Era | ISI | Institute for Systems Integrity
As AI agents move into operational workflows, organisations face a new governance challenge: managing humans and managing autonomous systems are different problems. This article explains why failure modes, controls, and accountability must be designed differently—and what boards must now oversee in the AI era.

Tone at the Top, Drift in the System: Why ethical drift begins when leadership signals are inconsistent, tolerated, or ignored

Most organisations don’t fail because of a single unethical decision—they drift. Tone at the Top, Drift in the System examines how culture is shaped not by stated values, but by the signals leaders send through what they reward, ignore, and tolerate. Drawing on governance research and real-world patterns, this article explores how small inconsistencies accumulate into systemic risk, and why boards must look beyond frameworks to the behaviours that are quietly allowed to continue.

Tone at the Top and Ethical Drift | Institute for Systems Integrity | ISI
Ethical failure rarely begins with misconduct. It begins with inconsistency. This article explores how leadership signals — especially what is tolerated — shape organisational culture and drive systemic drift.

🏛️ When AI Writes the Discharge Summary: A Governance, Duty, and Systems Integrity Challenge

As generative AI tools move rapidly into clinical workflows, their use in discharge summaries and medication instructions is often framed as a productivity gain. Emerging evidence, however, suggests a more complex reality. While AI can produce outputs that are complete, fluent, and consistent, safety-critical risks — including hallucinations, incorrect instructions, and uneven performance across patient groups — persist. This article reframes AI-generated discharge communication not as a documentation tool, but as a governance and systems integrity challenge requiring board-level oversight, clear accountability, and robust control design.

AI Discharge Summaries: Governance, Risk, and Systems Integrity in Healthcare | ISI | Institute for Systems Integrity
Generative AI can produce discharge summaries with impressive completeness and fluency. Emerging evidence shows persistent safety risks, reframing discharge automation as a governance and systems integrity challenge rather than a productivity solution.

Adding Value Through Ethical Leadership: Why Board Behaviour Shapes System Integrity

Ethics is often discussed in governance as culture, values, or compliance. But within complex organisations, ethical leadership functions as something more structural. Board behaviour shapes decision environments, influences how risks are surfaced, and determines whether integrity is reinforced or gradually eroded. In this article, the Institute for Systems Integrity examines why ethical discipline at the board level is not symbolic governance, but a core mechanism through which systems remain credible, resilient, and effective.

Ethical Leadership in Governance: Why Board Behaviour Shapes System Integrity | ISI | Institute for Systems Integrity
Ethical leadership is often treated as culture or compliance. In reality, board behaviour functions as a structural governance control layer shaping decision quality, risk visibility, and organisational trust. This article explores how ethical discipline at board level protects system integrity.

Carewashing: When “We Care” Becomes Organisational Self-Deception

Organisations increasingly speak the language of employee wellbeing. Leadership messaging emphasises that people matter, while wellbeing initiatives, resilience programs, and support services become more visible across workplaces. Yet many employees continue to experience chronic workload pressure, poorly managed organisational change, and inconsistent decision-making. This growing gap between organisational messaging and the lived experience of work is increasingly described as carewashing. This article examines how organisational expressions of care can unintentionally mask structural drivers of psychosocial risk and explores why genuine organisational care ultimately depends not on rhetoric, but on the design of work and the systems that protect people within it.

Carewashing: When “We Care” Becomes Organisational Self-Deception | Institute for Systems Integrity | ISI
Carewashing occurs when organisations signal care for employee wellbeing while the structural conditions shaping work remain unchanged. When wellbeing messaging replaces real system redesign, trust erodes, and psychosocial hazards persist beneath the language of care.

Water Governance in Healthcare Systems: A Planetary Boundary and Supply Chain Risk Analysis

Freshwater systems are a critical foundation of planetary stability, yet water governance rarely features in healthcare sustainability strategies. Healthcare depends heavily on reliable water for clinical care, sanitation, pharmaceuticals, and infrastructure, while global supply chains embed additional water risks. As climate change and over-extraction place increasing pressure on freshwater resources, healthcare systems face growing operational and systemic vulnerabilities. This paper examines water governance through the lens of planetary boundaries and supply chain risk, arguing that resilient health systems require stronger integration of freshwater risk into institutional governance.

Water Governance in Healthcare Systems | ISI | Institute for Systems Integrity
Freshwater systems are under increasing global stress, yet healthcare sustainability strategies rarely address water governance. This article examines water as a planetary boundary risk and explores how healthcare supply chains and infrastructure depend on stable freshwater systems.

AI as a Systems Stress Test 

Artificial intelligence is often discussed as a breakthrough technology capable of transforming healthcare and complex organisations. Yet as AI moves from experimentation to real-world deployment, a different reality is emerging. Rather than simply improving performance, AI frequently exposes deeper weaknesses in the systems it enters. In this paper, the Institute for Systems Integrity (ISI) examines why artificial intelligence often functions as a systems stress test—revealing hidden fragilities in data ecosystems, operational workflows, governance structures, and organisational readiness.

AI as a Systems Stress Test | Institute for Systems Integrity | ISI
Artificial intelligence is often presented as a transformative solution. In practice, AI frequently acts as a stress test for organisational systems. When deployed in complex environments such as healthcare, AI exposes weaknesses in data integrity, workflows, governance, and system readiness.

Low-Recoverability Plastics and the Governance Logic of Targeted Bans

Low-recoverability plastics represent a structural governance challenge rather than simply a waste-management issue. When products are designed with near-zero probability of recovery and high environmental leakage, downstream solutions such as recycling and clean-up become economically and operationally inadequate. In these cases, the core failure lies in product architecture and incentive design. Targeted bans, therefore, function not as symbolic environmental actions but as upstream governance instruments — correcting persistent design failures and preventing predictable harm before it enters circulation. From a systems integrity perspective, such measures reflect a transition from managing waste to managing risk at the level of design.

Low-Recoverability Plastics: Governance Logic of Bans | ISI | Institute for Systems Integrity
Small plastic items such as soy sauce containers expose a structural weakness in waste systems. This paper examines targeted bans as upstream governance controls that eliminate predictable failure points and restore alignment between product design and environmental system capacity.

Diversity as an Integrity Mechanism in Board Decision Systems

Diversity is commonly framed as representation. In complex governance environments shaped by AI disruption, sustainability transition, and systemic risk, it performs a far more critical function. In Diversity as an Integrity Mechanism in Board Decision Systems, ISI reframes diversity as a structural stabiliser within board architecture — strengthening weak-signal detection, ethical contestability, and decision resilience under stress. Perspective breadth is not symbolic. It is protective.

Diversity as an Integrity Mechanism in Board Decision Systems | ISI | Institute for Systems Integrity
Diversity is commonly framed as representation. From a systems-integrity perspective, it functions as a governance stabiliser. In complex domains such as AI and sustainability, diversity strengthens weak-signal detection, ethical contestability and decision resilience.

Bed Block as a System Integrity Failure: Flow Breakdown at the Acute–Rehabilitation Boundary

Hospitals described as “full” are often signalling a deeper structural issue. This paper examines bed block not as a simple shortage of beds, but as a breakdown of flow integrity at the acute–rehabilitation boundary. Drawing on systems integrity frameworks and published health system evidence, it explores how capacity constraints, funding design, and fragmented accountability can combine under sustained stress to produce predictable congestion — even in well-intentioned systems.

Bed Block as a System Integrity Failure | ISI | Institute for Systems Integrity
When hospitals are described as “full,” the underlying failure has often already occurred. This ISI analysis examines bed block not as a bed shortage problem, but as a breakdown of flow integrity at the acute–rehabilitation boundary under sustained system stress.

Foundations Article #1

Decision-Making Under System Stress: Why Good People Make Predictably Weaker Decisions — and What Integrity Requires
Why capable people make weaker decisions under institutional stress — and what integrity requires when systems are strained.

Previous Publication:

Circularity Under Clinical Constraints: Why recycled material claims do not guarantee circular outcomes in healthcare

Healthcare increasingly adopts recycled-content materials in the name of sustainability. But recycled inputs do not guarantee circular outcomes. Clinical safety, contamination risks, regulation, and waste pathways often reshape what is truly recoverable. This paper examines the gap between circularity claims and system-level realities.

Circularity Under Clinical Constraints | ISI | Institute for Systems Integrity
Healthcare increasingly adopts recycled-content materials in the name of sustainability. But recycled inputs do not guarantee circular outcomes. Clinical safety, contamination risks, regulation, and waste pathways often reshape the lifecycle reality.

Digital Transition Risk: Why Non-Tech Boards Inherit Tech-Grade Exposure

Digital transformation is often framed as an operational or technological upgrade. This paper examines a less discussed reality: how digital dependency fundamentally reshapes enterprise risk. As organisations adopt cloud systems, electronic records, vendor-managed infrastructure, and AI-enabled tools, boards inherit technology-grade exposure irrespective of industry classification.

Digital Transition Risk and Board Governance | ISI | Institute for Systems Integrity
Digital modernisation is often treated as an operational upgrade. In reality, it transforms how organisational risk behaves. This paper examines why boards of traditionally structured organisations now inherit technology-grade exposure across cybersecurity, data, vendors, and AI.

Mentoring as Infrastructure: Learning, Power, and Risk in Organisational Design

Mentoring is widely treated as goodwill.
In practice, it behaves like infrastructure.
When designed well, it accelerates learning and strengthens judgment. When left to intention alone, it can narrow thinking, create dependency, obscure power, and amplify risk. This paper reframes mentoring as a learning control system, outlining benefits, predictable failure modes, and the safeguards required to protect judgment, independence, and decision quality.

Mentoring as Infrastructure: Learning, Power & Risk | Institute for Systems Integrity | ISI
Mentoring is often framed as goodwill, yet it functions more like infrastructure. When designed well, it strengthens judgement and learning. When left to intention alone, it can narrow thinking, create dependence, and quietly amplify organisational risk.

The Residual Risk Budget: Why “Net Zero” Still Requires Governance

Net zero is often described as a destination — emissions reduced, offsets applied, balance achieved. But this framing can obscure a critical governance reality. Even under credible net-zero pathways, residual emissions, residual harms, and residual uncertainties remain. They do not disappear; they are redistributed across systems, stakeholders, and time. This paper introduces the concept of the Residual Risk Budget — the remaining exposure that must be made visible, owned, and adaptively governed. Without this discipline, net zero risks become an accounting construct that masks ethical trade-offs and accelerates integrity drift.
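The residual-exposure arithmetic the paper points to can be sketched in a few lines. The class, field names, figures, and the reliability discount below are illustrative assumptions for this index, not ISI's model:

```python
# Illustrative sketch of a "residual risk budget" calculation.
# All names and figures are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class NetZeroPathway:
    gross_emissions: float  # tCO2e emitted in the period
    reductions: float       # tCO2e avoided through abatement
    offsets: float          # tCO2e claimed via offsets

    def reported_net(self) -> float:
        # The headline "net zero" accounting.
        return self.gross_emissions - self.reductions - self.offsets

    def residual_risk_budget(self, offset_reliability: float) -> float:
        # Remaining exposure: unabated emissions, discounted only by the
        # fraction of offsets assumed to deliver durable removal.
        unabated = self.gross_emissions - self.reductions
        effective_offsets = self.offsets * offset_reliability
        return max(unabated - effective_offsets, 0.0)


pathway = NetZeroPathway(gross_emissions=100.0, reductions=60.0, offsets=40.0)
print(pathway.reported_net())             # 0.0 -> "net zero" on paper
print(pathway.residual_risk_budget(0.5))  # 20.0 -> exposure still to be governed
```

The point of the sketch is that the same pathway can report zero while leaving a non-zero residual that must be owned and governed, exactly the gap the paper names.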

Residual Risk Budget: Net Zero Still Needs Governance | Institute for Systems Integrity | ISI
Net zero is often framed as an endpoint. In reality, residual emissions, harms, and uncertainties persist. ISI introduces the Residual Risk Budget — a systems integrity lens that makes remaining exposure visible, owned, and governable across boards, regulators, and institutions.

Beyond Legality: Why Boards Must Ask “Should We?”

In contemporary governance, legality is often treated as the primary decision threshold. Yet many organisational failures arise not from illegal actions, but from decisions that were lawful, compliant, and ultimately indefensible. This ISI paper examines the critical distinction between “Can we?” and “Should we?”, arguing that resilient boards must govern beyond permission alone and embed integrity as a core decision discipline.

Beyond Legality: Why Boards Must Ask “Should We?” | Institute for Systems Integrity | ISI
Governance failures rarely stem from illegality. More often, they arise from lawful, compliant decisions that prove strategically or ethically unsound. This ISI paper explores why boards must move beyond “Can we?” and institutionalise the discipline of asking “Should we?”

Governing Wicked Problems in Healthcare: An Integrity Architecture for AI, Sustainability, and Net Zero

Healthcare systems are entering a period of unprecedented complexity. Artificial intelligence, sustainability pressures, and net zero commitments are converging within institutions not originally designed to absorb this pace and scale of change. This paper argues that these challenges are best understood not as technical or compliance problems, but as wicked problems requiring a fundamentally different governance response.

Governing Wicked Problems in Healthcare | Institute for Systems Integrity | ISI
Healthcare AI, sustainability, and net zero are not technical challenges with tidy solutions. They are wicked problems—complex, evolving, and resistant to linear control. This paper sets out an integrity-based governance architecture for holding risk, accountability, and adaptation under pressure.

Beyond AI Compliance: Designing Integrity at Scale 

This paper examines why most AI failures do not begin with flawed technology, but with governance systems that prioritise reassurance over judgment. As AI accelerates decision-making across complex organisations, traditional compliance frameworks struggle to detect drift, surface doubt, or correct harm before it becomes visible. This paper sets out a systems-level approach to AI governance—one that treats integrity as an architectural property, designed deliberately into authority, accountability, and the capacity to pause under pressure.

Beyond AI Compliance: Designing Integrity at Scale
Dr Alwin Tan, MBBS, FRACS, EMBA (University of Melbourne), AI in Healthcare (Harvard Medical School)
Senior Surgeon | Governance Leader | HealthTech Co-founder | Australian Institute of Company Directors — GAICD candidate | University of Oxford — Sustainable Enterprise | Institute for Systems Integrity (ISI)

Governing AI in Healthcare: A Practical Integrity Architecture

This paper sets out why AI governance most often fails after deployment, not at approval. In real clinical environments, performance, safety, and accountability are shaped by workflow, staffing, training, and local judgment—not the model alone. This paper presents a practical integrity architecture for healthcare AI: designed to detect drift, preserve clinical judgment, and enable correction under operational pressure, before harm becomes visible to patients or boards.

Governing AI in Healthcare: A Practical Integrity Architecture
AI governance does not fail at approval. It fails when drift, workload, and accountability pressures appear after deployment. This paper outlines a practical integrity architecture for governing AI in real clinical systems.

The Systems Integrity Toolkit — Phase I
Why most integrity failures are not visible in time — and how systems allow harm to accumulate before anyone intervenes

Foundation Toolkit #1

This paper introduces the Systems Integrity Toolkit — Phase I, a governance architecture that consolidates ISI’s foundational research into a practical framework for identifying integrity risk before outcomes harden, showing how system stress, decision degradation, governance mediation, and failure dynamics interact long before harm becomes visible.

Systems Integrity Toolkit – Phase 1 | Institute for Systems Integrity
The Systems Integrity Toolkit – Phase I introduces a governance architecture for understanding how integrity fails under system stress and how organisations can intervene before harm occurs.

What Systems Refuse to Change

Most systems don’t fail because they can’t see the problem.
They fail because they can’t change the things they’ve learned to protect.

As a companion to the Systems Integrity Toolkit — Phase I, this paper examines why integrity risks persist even after they become visible. It explores systemic refusal — the quiet protection of certain variables from change — and shows how governance under pressure can stabilise harm rather than correct it. Together, the Toolkit and this analysis describe a familiar condition in complex organisations: clarity without permission to change.

What Systems Refuse to Change | Institute for Systems Integrity | ISI
This paper examines why systems resist change under pressure and how structurally protected variables shape governance behaviour and outcomes.

The Failure Taxonomy: How Harm Emerges Without Malice — Why most disasters are not caused by bad people, but by predictable system drift

Foundations Article #4

This paper introduces the Failure Taxonomy — a structural model showing how harm accumulates in complex systems through drift, signal loss, and accountability inversion, without anyone intending it.
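The interaction of drift and signal loss can be illustrated with a toy simulation; the function, parameters, and linear-drift assumption are invented for this sketch and are not the taxonomy's formal model:

```python
# Toy model of the drift / signal-loss interaction: harm grows steadily,
# but the visible signal is attenuated a little more each step, so
# detection can lag accumulation indefinitely. Parameters are invented.

def simulate(steps, drift_per_step, detection_threshold, signal_decay):
    """Return (step at which drift becomes visible, drift at that point),
    or (None, final drift) if it is never detected."""
    visibility = 1.0
    drift = 0.0
    for step in range(1, steps + 1):
        drift = drift_per_step * step          # harm accumulates linearly
        visibility *= (1.0 - signal_decay)     # each step erodes visibility
        if drift * visibility >= detection_threshold:
            return step, drift
    return None, drift


# With intact signals, drift is caught early:
caught_at, drift_then = simulate(50, 0.1, 1.0, signal_decay=0.0)   # step 10
# With compounding signal loss, the same drift is never detected:
missed_at, drift_end = simulate(50, 0.1, 1.0, signal_decay=0.05)   # None
```

Under the decayed-signal run, drift ends five times above the detection threshold without ever becoming visible, which is the "harm without malice" pattern the taxonomy describes.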

The Failure Taxonomy | Institute for Systems Integrity | ISI
This paper introduces the Failure Taxonomy — a structural model showing how harm accumulates in complex systems through drift, signal loss, and accountability inversion, without anyone intending it.

The Pause Principle: Governance Failure Under Pressure

The ISI Pause Principle explains why governance fails when reaction replaces reflection. Under pressure, systems that remove space between signal and response degrade judgment, suppress warning signs, and invert accountability. Pause is not a leadership trait — it is a governance control condition.

The Pause Principle: Governance Failure Under Pressure | ISI
A systems analysis of how urgency compresses judgment, suppresses signals, and accelerates governance failure — and why pause must be designed as a control condition.

Integrity Is a System Property: Why outcomes reflect design, not intent

Foundations Article #3

Integrity is often treated as a personal trait. This paper shows why it is better understood as a system property — shaped by how authority, accountability, and information are aligned under stress, and why outcomes reflect design rather than intent.

Integrity Is a System Property | Institute for Systems Integrity | ISI
Integrity is often treated as a personal trait. This paper shows why it is better understood as a system property — shaped by how authority, accountability, and information are aligned under stress, and why outcomes reflect design rather than intent.

When the Constitution Becomes a Weapon
How governance drift turns compliance into a liability under system stress

This paper examines how constitutions, delegations, and oversight structures can remain legally intact while drifting out of alignment with real decision-making, allowing compliance to persist even as governance control erodes.

When the Constitution Becomes a Weapon | Institute for Systems Integrity
Governance failure rarely begins with misconduct. It begins when constitutions, delegations, decisions, and oversight drift out of alignment under pressure. This paper explains how compliance can persist even as integrity quietly erodes.

Why Oversight Fails Under Pressure

How system stress distorts visibility, weakens governance, and produces predictable outcomes

Foundations Article #2

Why Oversight Fails Under Pressure | Institute for Systems Integrity
Governance systems are designed for stability. Under sustained stress, visibility distorts, oversight becomes selectively blind, drift normalises, and outcomes become predictable.

When Resilience Appears, Governance Has Already Failed: Why frontline heroics are a warning signal — not a success story

A companion paper to Why Oversight Fails Under Pressure examining how human resilience conceals system failure.

When Resilience Appears, Governance Has Already Failed | ISI
When frontline teams keep systems functioning through heroics and sacrifice, governance has already failed. This ISI paper explains how resilience hides systemic risk.

Frameworks

The Systems Integrity Cascade — Understanding Harm in Complex Systems
Learn the Systems Integrity Cascade framework: how system conditions, decision integrity, governance mediation, and failure dynamics interact to produce outcomes in complex institutions.
Oversight Blindness Pathway (Derived View) | ISI
A simplified, derived view of how the Systems Integrity Cascade unfolds under sustained system stress—showing how visibility distorts, governance loses sensitivity, drift normalises, and outcomes become predictable.
Integrity as a System Property | Institute for Systems Integrity | ISI
A derived governance lens explaining how the alignment of authority, accountability, and information produces integrity and outcomes under system stress.
The Failure Taxonomy | Institute for Systems Integrity | ISI
A derived governance framework showing how drift, signal loss, and accountability inversion produce harmful outcomes in stressed systems.
The Pause Principle | Governance Control Condition – ISI
A governance framework explaining how the loss of pause accelerates failure under pressure — and why calm must be designed into systems, not demanded of individuals.
Integrity Protection Stack (IPS) | Institute for Systems Integrity | ISI
The Integrity Protection Stack (IPS) explains how integrity is preserved under system stress through layered system design rather than individual resilience or heroics.

Publication Index

Publications Index | Institute for Systems Integrity
An overview of the Institute for Systems Integrity’s published work, highlighting foundational papers and thematic groupings.