SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

AI agents multiply risk, says DigiCert chief product officer

Fri, 1st May 2026
Anthony Caruana, Interview Editor

Technology teams often talk about how AI can be a force multiplier because it can be used to accelerate and automate complex workflows. But agentic AI is also a risk and complexity multiplier. Every AI agent that's deployed is the equivalent of another employee being onboarded. And that means its identity and activity need to be monitored and secured.

When the big data era started over a decade ago, the challenges of volume, velocity and veracity were highlighted. The same issues are now presenting themselves in the age of agentic AI. A developer can deploy dozens, or even hundreds, of agents in less time than it takes to read this paragraph.

Deepika Chauhan, the Chief Product Officer at DigiCert, says everything in the digital world – servers, agents, devices and software – needs to talk to each other. This communication puts identity and trust at the forefront of AI security.

"The sheer scale and machine speed at which today's digital transactions are occurring has changed the rules when it comes to security. DigiCert provides visibility and automation solutions for the entire trust infrastructure across software, machines, content, devices and messaging."

Organisations struggle with AI security because they often don't know what assets they have. For example, certificate management has become more complex. Deepika says many organisations don't know how many certificates they have. But that's just the first challenge they must address.

"It's such a foundational element. If you don't have visibility into your entire trust surface, across machines, agents, content and devices, it's difficult to manage and enforce policy. So that's the first thing from a visibility point. The second thing is distributed operations. Decentralised operations make visibility harder and increase the difficulty of enforcing consistent security policies across the trust surface."
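One slice of the visibility problem Deepika describes – knowing which certificates you have and when they expire – can be sketched in a few lines of Python. This is a minimal illustration, not DigiCert tooling: it reads the TLS certificate a server presents and reports its expiry. The host names are placeholders, and a real inventory would also cover private CAs, code-signing and device certificates.

```python
# Minimal certificate-visibility sketch: fetch the TLS certificate a
# host serves and return its expiry date. Illustrative only.
import ssl
import socket
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    """Parse the 'notAfter' string returned by ssl's getpeercert()."""
    dt = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return dt.replace(tzinfo=timezone.utc)

def cert_expiry(host: str, port: int = 443) -> datetime:
    """Connect to host:port, read the served certificate, return expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return parse_not_after(cert["notAfter"])

# Usage (requires network access):
#   days_left = (cert_expiry("example.com") - datetime.now(timezone.utc)).days
```

Running a scan like this across every internal and external endpoint is the kind of discovery step that has to happen before any governance policy can be enforced.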

As AI has become more accessible over the last three or four years, we're now starting to see organisations move from AI experiments and proofs of concept into production environments. But that shift is not a centralised process. AI is now a part of many applications and platforms. As business teams adopt new or updated tools, AI is finding its way into production. The decentralisation of services means security is often an afterthought as teams are prioritising productivity.

Deepika says the risks this brings can be considered across three dimensions. Firstly, the trust surface is increasing. With a single developer able to deploy dozens of agents, that puts identity management at the frontline of the security battle. The second dimension is a rapidly changing software supply chain. AI-generated software can introduce new vulnerabilities, both in internally developed code and through malware that might be introduced by a threat actor. The third dimension is more positive. AI's ability to connect the dots across systems means security teams have access to more insights.

It's visibility that Deepika says is the biggest issue many organisations face.

"When we talk to our customers the number one thing which always comes up is not knowing how to begin providing governance because they don't know what they have. We conducted a customer survey about the location of their AI agents. We asked if they were coming from hyperscalers, from desktop applications or from SaaS. But the fourth option was 'We don't know'. And it was the most popular choice."

Most organisations, Deepika says, have a good understanding of their physical and data assets but when it comes to AI agents their visibility falls away. But she believes current technologies, intelligently deployed, can solve this challenge.   

PKI and DNS provide a strong foundation to extend the trust for AI agents and models. Gartner says that agents are not substantially different to cloud workloads. That means DNS can be used to ensure agents are only interacting with authorised systems or other agents. Similarly, PKI can be used to ensure those communications are secure.

"DNS is handling trillions of queries a day," says Deepika. "It needs to be adapted but it can be used. This is what we do at DigiCert. Software signing exists but when we extend that technology we can protect the models, training data, the weights and where it is getting executed, like a confidential computing environment. We are taking the foundation and adapting to the problem at hand."
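The DNS-plus-PKI pattern described above can be sketched as follows. This is an assumed illustration, not a DigiCert API: the allowlist, helper names and file paths are hypothetical. The idea is that an agent checks the target domain against an approved list before connecting, and the connection itself uses standard certificate verification.

```python
# Sketch of DNS-based policy plus PKI for agent communications.
# APPROVED_DOMAINS and all names here are hypothetical.
import ssl
import socket

APPROVED_DOMAINS = {"api.internal.example", "agents.example.com"}

def domain_allowed(host: str) -> bool:
    """Allow exact matches or subdomains of an approved domain."""
    return any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS)

def connect_agent(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection only to authorised agent endpoints."""
    if not domain_allowed(host):
        raise PermissionError(f"{host} is not an authorised agent endpoint")
    ctx = ssl.create_default_context()    # verifies the peer's certificate chain
    # ctx.load_cert_chain("agent.pem")    # mutual TLS: present the agent's own cert
    sock = socket.create_connection((host, port), timeout=5)
    return ctx.wrap_socket(sock, server_hostname=host)
```

In practice the allowlist would be enforced at the resolver rather than in application code, which is what makes DNS attractive as a control point: the agent never learns the address of an unauthorised system.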

One of the challenges is ensuring security does not impact the user experience. That means finding ways to integrate security into AI without developer intervention, so that when an agent is created it comes up with its own identity without the developer needing to do anything. Deepika notes that DigiCert's passport solution identifies the agent, how long it is valid for, where it can go and what model it is using. That passport comes with a kill switch so the agent can be easily disabled if needed.
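Conceptually, a passport of that kind might be modelled as below. The field names and structure are assumptions for illustration, not DigiCert's actual schema – but they capture the elements Deepika lists: identity, validity window, permitted destinations, model, and a kill switch.

```python
# Hypothetical "agent passport" record; field names are illustrative,
# not DigiCert's actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentPassport:
    agent_id: str
    model: str
    allowed_domains: set
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    valid_for: timedelta = timedelta(hours=24)
    revoked: bool = False                 # the kill switch

    def is_valid(self, now: datetime = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return not self.revoked and now < self.issued_at + self.valid_for

    def may_visit(self, domain: str) -> bool:
        """An agent may only reach domains on its passport while valid."""
        return self.is_valid() and domain in self.allowed_domains

    def kill(self) -> None:
        """Disable the agent immediately."""
        self.revoked = True
```

Issuing such a record automatically at agent creation time, with no developer action, is the kind of frictionless enrolment the article describes.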

By focusing on visibility and leveraging the right technologies, it is possible to securely deploy AI agents at scale. Tools including DNS and PKI can provide the foundation for secure agentic AI projects. As the number of AI agents continues to grow, ensuring you have a robust and trusted platform is essential for protecting the enterprise.