
AI agents race ahead of governance, security & trust

Fri, 23rd Jan 2026

Artificial intelligence agents are moving into mainstream use in large organisations without matching investment in safety and oversight. This is exposing gaps in governance, security and accountability frameworks, according to industry executives responding to new research and policy debates this week.

Deloitte's latest survey on AI agents reports that only 21% of organisations have robust governance or oversight in place for these systems, even as adoption accelerates. The firm found that 23% of companies already use AI agents, a figure it expects to reach 74% within two years. Businesses that do not use agents at all are projected to fall from 25% today to 5% over the same period.

The findings echo growing concern among technology leaders that autonomous and semi-autonomous software agents are outpacing traditional risk controls built for more static AI models and human-centric workflows.

Governance gap

Ali Sarrafi, CEO and Founder of Kovant, said many organisations do not accurately understand the risk.

"The real issue Deloitte is highlighting isn't that AI agents are dangerous, that would be unfair to say. The issue is that too many are being deployed without proper context and governance. When agents operate as their own entities, no one can clearly explain what they did, why they did it, or what controls were in place. That's what makes risk hard to manage and almost impossible to insure."

Sarrafi argued that organisations need to design agentic systems with mechanisms that make decisions and actions traceable.

He said, "The answer is governed autonomy. Well-designed agents with clear boundaries, policies and definitions managed the same way as an enterprise manages any worker can move fast on low-risk work inside clear guardrails, but escalate to humans when actions cross defined risk thresholds. With detailed action logs, observability, and human gatekeeping for high-impact decisions, agents stop being mysterious bots and become systems you can inspect, audit, and trust."

He said that competitive advantage would depend less on first-mover adoption and more on whether companies embed accountability into their operating models.

"Over the next few years, agents will become a core operating layer inside large enterprises. The winners won't just be the companies that deploy them first, but the ones that deploy them well, with accountability, visibility, and control built in from day one," said Sarrafi.

Data security strain

Security specialists are also warning that agentic AI challenges long-standing data protection assumptions. Traditional models of access control and monitoring presuppose human-paced activity and largely reactive oversight. Agentic systems can initiate actions, chain tools together and move sensitive data across environments in seconds.

Eran Barak, Co-Founder and CEO at data security firm MIND, said current control models are not designed for this behaviour.

"Agentic AI is breaking the data security paradigm we've all known and relied on. AI Agents don't just access data, they act on it, deciding, moving and creating at machine speed, which makes human-centric controls obsolete. In this new reality, trust depends on continuous awareness of what data matters and the ability to protect it at the speed of AI," said Barak.

The comments reflect a wider shift in security thinking from static perimeter defences to continuous context-aware monitoring of how and where data flows once agents can initiate transactions, generate content and modify records autonomously.
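One way to picture such context-aware controls is an inline policy check evaluated on every agent-initiated data movement, rather than after-the-fact human review. The Python sketch below is a hypothetical illustration: the file labels and destination ceilings are assumptions for the example, and a real deployment would classify data continuously through discovery and labelling rather than from a static map.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    RESTRICTED = 2

# Assumed labels; real systems would derive these from data discovery.
DATA_LABELS = {
    "marketing/blog.md": Sensitivity.PUBLIC,
    "finance/payroll.csv": Sensitivity.RESTRICTED,
}

# Maximum sensitivity each destination may receive (assumed policy).
DESTINATION_CEILING = {
    "public-web": Sensitivity.PUBLIC,
    "internal-wiki": Sensitivity.INTERNAL,
    "vault": Sensitivity.RESTRICTED,
}

def check_flow(path: str, destination: str) -> bool:
    """Return True if an agent may move `path` to `destination`.

    Runs inline on every agent-initiated transfer, so the control keeps
    pace with machine-speed activity instead of relying on periodic
    human review.
    """
    label = DATA_LABELS.get(path, Sensitivity.RESTRICTED)  # fail closed on unknown data
    ceiling = DESTINATION_CEILING.get(destination, Sensitivity.PUBLIC)
    return label <= ceiling

assert check_flow("marketing/blog.md", "public-web") is True
assert check_flow("finance/payroll.csv", "public-web") is False  # blocked
assert check_flow("finance/payroll.csv", "vault") is True
```

Note the fail-closed defaults: data the system cannot classify is treated as restricted, and destinations it does not recognise accept only public data, so speed never outruns the policy.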

Cyber defence risks

Concerns about AI agents in security operations have also surfaced. Organisations already struggle to train human staff to resist cyberattacks, experts warn, and now face the dual task of hardening both humans and AI agents against social engineering and deception.

Cyber security providers testing AI agents inside Security Operations Centres (SOCs) are increasingly exploring uses that include automated threat triage, correlation of alerts and recommendation of response actions. The work highlights both the promise and the fragility of current systems.

Martin Jakobsen, Managing Director at security firm Cybanetix, said early experiments have exposed serious failure modes.

"We've been assessing the potential for AI agents to detect threats and recommend a course of action with a view to the technology being used to assist SOC analysts. This makes sense because many detections will often share the same remedial actions, so automating that can reduce analyst workloads substantially. However, during our assessments we found that one particular AI got the detection and response spectacularly wrong.

"Not only did it misinterpret the threat, it then went on to produce a fictitious kill chain and mitigation advice, all of which would have taken the SOC analyst down a rabbit hole they didn't need to go down. So, should the technology be used as a tool for defence? Absolutely, yes, in time, but it's a work in progress. The cyber sector cannot afford to abandon efforts because AI can and will speed up the cat and mouse game between attackers and defenders," said Jakobsen.