AI agents promise to 'run the business,' but who is liable if things go wrong?

Photo: The Register
One trillion dollars: that is the estimated market potential of AI agents, which, according to giants like Oracle, are set to independently "run businesses" by making decisions in areas such as HR, finance, and supply chains. Behind the vision of autonomous systems, however, lies a significant legal gap: it remains unclear who will bear liability when an algorithm makes a costly error. While technology providers promote tools capable of reasoning and executing processes, their lawyers are shielding contracts from the consequences of unpredictable behavior by non-deterministic AI systems. For business users worldwide, the situation is clear yet risky. Regulators such as the UK's Financial Reporting Council are communicating the principle explicitly: "you cannot blame the box." For errors in financial reports or audits, a human and a specific company will be held responsible, not the software provider. LLM hallucinations in regulatory documentation or logistics errors could become a direct liability for enterprises that place too much trust in artificial intelligence. In the era of the agentic revolution, the key challenge is not the technology itself but the renegotiation of contracts and the precise definition of liability boundaries for decisions made by code rather than an employee. The race for business automation is now forcing the creation of new governance standards and insurance against technological risk.
The vision of autonomous AI agents taking the helm of key business processes has ceased to be the domain of science fiction. The biggest players in the enterprise software market, such as Oracle, Salesforce, and SAP, are promising a revolution in finance, HR, and supply chain management. However, behind the facade of impressive presentations lies a fundamental question that keeps legal directors awake at night: who will bear responsibility when a digital employee makes a wrong decision with million-dollar consequences?
The tension is rising because the stakes are unprecedented. According to Gartner forecasts, by mid-2026 unlawful AI-driven decisions will generate more than $10 billion in compensation and remediation costs. The problem is that while the technology evolves at an exponential pace, legal frameworks and contract provisions lag behind, creating a dangerous liability gap.
"The box is not to blame" – the firm stance of regulators
For market oversight institutions, the matter is simple: technology does not exempt anyone from responsibility. Mark Babington, executive director of the UK's Financial Reporting Council (FRC), bluntly dismisses any attempt to shift blame onto algorithms. "You can't blame the box," he told the Financial Times, emphasizing that companies and specific individuals remain responsible for audit quality and the reliability of reporting, regardless of the tools they use.
This approach puts end users in a difficult position. On one hand, providers like Oracle promote their AI Agent Studio as a system capable of "actively running a business" while maintaining full security. On the other, when it comes to legal specifics, sales enthusiasm collides with the hard reality of warranties. Malcolm Dowden, a lawyer at Pinsent Masons, notes that traditional software was predictable, which allowed liability to be defined clearly. Agentic AI, non-deterministic by nature, introduces a variability that providers are unwilling to underwrite.
The trap of non-deterministic code
The key technical and legal challenge is that modern AI models do not follow rigid patterns. The same instruction can yield different results (a toy sketch of this non-determinism follows the list below), so providers defend themselves against taking on the risk of "unexpected behaviors." In contract negotiations, we are currently observing a kind of tug-of-war:
- Users demand guarantees regarding the lack of bias in models and the correctness of results.
- Providers push back, arguing that errors may result from interactions between the model and specific user prompts or data entered by the client.
- Instead of full responsibility for the outcome, providers prefer to offer tools for monitoring, observability, and auditing.
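To see why vendors resist guaranteeing outcomes, consider a toy Python sketch of how a language model picks its next token: it samples from a probability distribution rather than following a fixed rule. The distribution below is invented for illustration; real models weigh hundreds of thousands of tokens, but the consequence is the same, namely that two identical runs can legitimately diverge.

```python
import random

# Toy next-token distribution for an agent deciding what to do with an
# invoice. The probabilities are made up for illustration only.
next_action_probs = {
    "approve": 0.55,
    "escalate": 0.30,
    "reject": 0.15,
}

def sample_action(probs: dict[str, float]) -> str:
    """Draw one action according to its probability weight, the way an
    LLM samples its next token at a non-zero temperature."""
    actions, weights = zip(*probs.items())
    return random.choices(actions, weights=weights, k=1)[0]

# The same "instruction" run twice can produce different decisions:
for run in (1, 2):
    print(f"run {run}: agent chose '{sample_action(next_action_probs)}'")
```

This is precisely the property that breaks traditional warranty language: there is no single "correct" output to warrant against.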
As Georgina Kon, a partner at Linklaters, notes, the risk of "magnification" is enormous. An error committed by an AI agent can be replicated thousands of times in a fraction of a second before any human notices it. This cascading effect is why tech giants like Microsoft and SAP avoid explicit declarations about assuming liability for the errors of their autonomous systems.
Defensive AI as a new survival strategy
In the face of legal ambiguity, Lydia Clougherty Jones from Gartner suggests adopting the concept of "defensive AI." Companies must stop treating agents as black boxes and start implementing mechanisms that allow every decision made by the system to be justified, repeatably and convincingly, before supervisory authorities. This requires a radical improvement in model explainability and the deployment of so-called guardian agents – dedicated AI components whose sole task is to oversee other agents and catch anomalies in their operation, as sketched below.
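For readers who think in code, here is a minimal, hypothetical sketch of the guardian-agent pattern: a supervising layer that reviews another agent's proposed action against hard governance limits before it is executed. The names (`ProposedAction`, `guardian_review`) and the threshold are assumptions for illustration, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str        # e.g. "payment" or "cv_rejection" (illustrative)
    amount: float    # monetary value; 0.0 for non-financial actions
    rationale: str   # the acting agent's justification, kept for audit

# Assumed per-action ceiling set by the company's governance policy.
APPROVAL_LIMIT = 10_000.0

def guardian_review(action: ProposedAction) -> bool:
    """Return True only if the action passes the guardian's checks;
    anything else is blocked and escalated to a human."""
    if action.amount > APPROVAL_LIMIT:
        return False   # too large to execute autonomously
    if not action.rationale.strip():
        return False   # no recorded justification, no execution
    return True

action = ProposedAction("payment", 25_000.0, "duplicate invoice refund")
print("execute" if guardian_review(action) else "block and escalate")
```

In practice the guardian would likely itself be a model with its own failure modes, which is why the sources quoted here pair the pattern with monitoring, observability, and audit tooling rather than treating it as a complete answer.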
Organizations that ignore this aspect expose themselves not only to financial losses but also to criminal liability. This particularly applies to sensitive sectors such as HR, where automated CV filtering can lead to accusations of systemic discrimination. In such a scenario, the British ICO (Information Commissioner's Office) clearly indicates: the organization as the data controller bears responsibility unless it manages to transfer this risk to the provider through precise contractual provisions.
"When AI agents begin acting on behalf of an organization, decision risk becomes ambiguous and unpredictable. This signals a redistribution of AI risk with unknown parameters." – Lydia Clougherty Jones, Gartner VP Analyst.
Revenue versus risk: who will blink first?
AI investments are expected to reach $2.52 trillion this year. Sums of that magnitude create enormous pressure for a return on investment, pushing providers toward aggressive marketing. However, the silence of companies like Workday, Salesforce, and ServiceNow regarding specific legal commitments is telling. The industry is in a "soft-launch" phase, testing how much risk the market is willing to accept.
It can be predicted that in the coming years, we will witness a series of precedent-setting lawsuits that will ultimately shape market standards. Until then, companies opting for full autonomy of their business processes must be aware that in a clash with a regulator, the "algorithm error" argument will be as effective as trying to convince a police officer that it was the car, not the driver, that exceeded the speed limit. Responsibility remains human, even if the decision was entirely digital.