The risk of not moving: Why waiting is no longer the safe option for regulated industries

By Chris Smith

April 07, 2026

General

Risk aversion is a feature of regulated industries, not a flaw. Aviation, construction, financial services, healthcare and manufacturing operate in environments where the consequences of getting things wrong are serious. Caution is professionally ingrained, institutionally rewarded and, in many cases, legally required.

But caution applied uniformly — to operations and to operational change — produces a specific failure mode. It mistakes inaction for safety. And in a period of genuine technological shift, the cost of waiting is rising faster than most organisations have accounted for.

Why regulated industries move slowly on new technology

The structural reasons for slow technology adoption in regulated environments are well understood. Audit obligations create conservative change management instincts. Procurement cycles are long. Risk functions are resourced to identify downside, not to model the cost of missed opportunity. Leadership accountability is asymmetric: the person who moved too fast and had a problem is visible; the person who waited and fell behind rarely is.

These dynamics are not irrational. They reflect the genuine asymmetry of consequences that regulated industries face. But they create a consistent pattern: technology adoption that lags the broader market by several years, with new tools taken up only once early movers have absorbed the implementation risk and produced the evidence base that risk functions need.

The challenge is that this lag strategy works less well when the pace of change accelerates. When the gap between adoption and non-adoption compounds — when early movers build capability, institutional knowledge and operational advantage while late movers remain static — the calculus shifts.

The real cost of the 'wait and see' position

Organisations holding a 'wait and see' position on AI in regulated environments typically frame the risk as implementation risk: what goes wrong if we move too early? The more relevant question, increasingly, is the cost of not moving at all.

This cost operates across three dimensions:

  • Operational efficiency: Competitors using AI-assisted workflows complete compliance tasks, incident reviews and audits faster. The gap compounds over time.
  • Talent and capability: Organisations that have not built AI literacy or implemented AI tools are less attractive to practitioners who expect to work with modern systems.
  • Data advantage: Organisations building structured, AI-ready data foundations now will have significantly richer datasets to draw on when AI capabilities mature further. Delay means a data gap that is difficult to close retroactively.

None of these costs appear on a risk register in the way that a failed implementation would. That asymmetry — visible downside risk, invisible opportunity cost — is itself part of what makes wait-and-see a persistent organisational default.

What the internal conversation actually looks like

Organisations that have moved on AI in regulated environments — including those in EHS-intensive industries — report a recognisable pattern in the internal conversation that precedes action.

The initial trigger is rarely a strategic mandate. It is more often a specific operational frustration: a reporting process that takes too long, an audit preparation cycle that consumes disproportionate resource, a recurring manual task that a single person recognises as automatable. Someone in the organisation raises the question.

What follows is a period of informal exploration — reading, conversations with peers in other organisations, attendance at industry events — before the idea surfaces formally. At that point, the risk function and legal or compliance teams typically become involved, and the conversation shifts from 'could this work?' to 'what would make this safe to try?'

Getting leadership on board tends to require three things: a credible answer on data security, a clear scope for a pilot, and evidence from comparable organisations that the technology works in a regulated context. The last of these is often the hardest to produce internally — which is why peer testimony carries significant weight in these decisions.

The fears that nearly stop organisations moving

The concerns that surface in these internal conversations are consistent across industries and organisation sizes. Understanding them is useful because they reveal where assurance effort needs to be concentrated.

  • Data security: will incident data, audit records or sensitive operational information be exposed to a third-party AI system? This is typically the first objection and the hardest to address without specific technical evidence.
  • Explainability: if an AI system contributes to a compliance decision, can that decision be explained to a regulator or auditor? The inability to answer this question confidently stops many organisations before they start.
  • Accuracy: what happens when the AI is wrong? In a regulated environment, an AI error in a safety or compliance context carries consequences that a spreadsheet error does not.
  • Workforce response: how will frontline workers and middle managers respond to AI tools? Will this be received as a threat or a productivity tool?

These concerns are legitimate. The organisations that move forward are not the ones that dismiss them — they are the ones that find credible, specific answers to each. The difference between the organisation that moves and the one that doesn't is usually whether someone internal has taken ownership of finding those answers.

Why the risk of waiting now outweighs the risk of moving

The argument that regulated industries should wait until AI is more proven rests on an assumption that the current moment is early and high-risk. That assumption is increasingly difficult to sustain.

AI in operational and compliance workflows is not a pilot technology in 2025. Organisations in aviation, construction, mining and manufacturing have implemented AI-assisted safety and compliance tools and produced documented results. The implementation risk that justified early caution has been substantially reduced by this accumulated evidence.

What remains is execution risk: the risk that a specific implementation, in a specific organisation, does not deliver what was promised. That is a real risk. It is also a manageable one — particularly if the implementation is scoped carefully, piloted in a low-stakes environment and evaluated against specific operational metrics.

The risk of waiting, by contrast, is structural. Every quarter in which competitors build AI capability, data foundations and institutional knowledge is a quarter that cannot be recovered. The organisations that are most confident in their AI implementations in 2027 are, with high probability, the ones that started in 2024 or 2025.

Starting well is not the same as starting big

The single biggest misconception about AI adoption in regulated industries is that moving forward means a large-scale, high-risk transformation programme. It does not.

The organisations that navigate this transition most effectively tend to start with a specific, bounded problem: one process, one team, one use case. The goal of the initial phase is not transformation — it is learning. Learning what the technology can and cannot do in a specific operational context, what the data requirements are, how the workforce responds and what governance structures are needed.

That learning is the foundation on which broader adoption is built. Organisations that start small, learn fast and expand deliberately tend to outperform those that either delay until they can go big or attempt enterprise-wide transformation without that foundational learning.

The question for regulated industries is not whether to move on AI. It is whether to move intentionally or be moved by circumstances.


Hear from the people who've already made the move

Swissport and Holcim share an honest account of what it took to get AI off the ground in a regulated environment — the internal conversations, the fears and why they decided waiting was the bigger risk.

Still weighing it up? So were they.

Join our free summit and hear directly from EHS leaders who sat where you are now — and what finally made the case to move.

Chris brings over a decade of experience in digital marketing, specializing in content strategy and organic visibility across diverse industries and sectors. His goal is to identify people's challenges and connect them with practical, effective solutions that truly make a difference.