Validation is becoming the hidden cost of AI in regulated companies.
A lot of AI projects look cheap until they touch a validated workflow. Then the real cost shows up in change control, documentation, review cycles, and the internal capacity needed to carry the whole thing through.
One of the easiest ways to misunderstand AI work in a regulated company is to treat the prototype as the expensive part.
Usually it isn’t.
The expensive part starts when the idea looks promising enough that somebody asks the obvious next question: what would it take to use this in a real validated workflow?
That’s when the conversation changes. Suddenly the issue is not model quality or latency or whether the demo looked convincing. It is what changed, who approves it, what has to be documented, how the risk gets described, what testing is enough, and whether the team even has the capacity to move it through change control.
That is why validation is becoming the hidden cost of AI in regulated companies. The model might be the new thing, but the system it lands in usually is not.
Why the prototype is the easy part
Most teams can move quickly when the work is still framed as exploration. You can test ideas, run a pilot, try a vendor, or prove out a narrow use case without forcing the whole organisation to react.
The friction starts when the AI system begins touching a workflow that already has controls around it. In a regulated setting, that means you are no longer just adding a feature. You are changing part of an operating environment that may already be documented, validated, audited, or tied to product quality.
That changes the economics fast.
What the hidden work actually looks like
I think this is the part a lot of people outside regulated industries underestimate. Validation work is not one neat checklist at the end. It is a pile of interlocking tasks that show up around the build.
Somebody has to define the intended use. Somebody has to describe the workflow that is changing. Somebody has to write test evidence that makes sense for the level of risk. Somebody has to decide how human review works. Somebody has to explain what happens if the model output is wrong, late, or inconsistent. And somebody has to own the documentation when an auditor or inspector asks for it later.
None of that is glamorous. But that is where months disappear.
Why AI makes the problem feel worse
AI adds a layer of discomfort because it makes people feel like the system is moving underneath them. Even when the use case is sensible, teams worry about drift, output quality, explainability, role boundaries, and whether the control model still makes sense.
That concern is not irrational. It just means the governance and validation model has to be thought through earlier than most teams expect.
In Europe, the AI Act only adds to that pressure. In manufacturing and quality-sensitive environments, teams are already trying to reconcile existing obligations like Annex 11 with newer AI-specific expectations. Even when the legal picture is still settling, the operational response is obvious: document more, review more carefully, and slow down when ownership is fuzzy.
Why fractional leadership helps
This is another place where the fractional model makes practical sense. Most companies do not need a permanent Head of AI sitting inside the business full time just to shepherd one change-control-heavy project through the system.
What they usually need is senior judgment at the moment the prototype has to become a governed piece of real work. Someone to narrow the scope, define the intended use properly, get the right teams aligned, and keep the validation burden proportional to the actual risk instead of letting the whole thing become theatre.
That job sits in the gap between engineering, QA, IT, and operations. It is as much about getting decisions made as it is about the technology itself.
The practical read
If an AI project inside a regulated company looks strangely expensive compared with the demo that started it, I would check these questions first:
Is the intended use actually clear? Has anyone defined the human review model? Does the team know which parts of the workflow are already validated? Is the change-control path understood? And is anybody senior enough to keep the scope from drifting while the documentation burden grows?
If the answer to any of those questions is no, the cost will not stay hidden for long.
It will show up in meetings, review cycles, delayed approvals, and internal fatigue. That is usually the moment when teams realise the prototype was the easy bit.
Michael Kilty is the founder of Arvanu and fractional Head of AI at NPLabs, a compounding pharmacy in Athens. Arvanu works with pharma and healthcare companies navigating AI in regulated environments.
Governance and validation usually travel together.
Most of the time, the validation burden gets heavy because the governance layer underneath it was never properly sorted. The first article explains where that drag usually starts.
If validation is eating the budget, narrow the problem first.
The useful first conversation is usually about scope, ownership, and what needs to be true before a pilot can become part of a real regulated workflow.