SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

Netskope's Tony Burnside - visibility is key to AI security

Mon, 20th Apr 2026 (Today)

Peter Drucker's famous quote about not being able to manage what you don't measure rings true in the AI age. You can't secure what you don't see. Or, more critically, you can't secure what you're not looking for. The challenge of AI is that many of the agents and tools that organisations are integrating are finding data and executing processes with little oversight.

Most of the focus on AI security has been on north-south traffic between users and AI apps. While preventing data leakage through AI tools remains important, east-west traffic created by AI tools that work across systems and applications is a new front in the battle to secure data

"It's a challenge that organisations must address quickly", says Tony Burnside, the SVP for APJ at Netskope. "Current tools don't give the visibility that teams demand into north-south and east-west traffic. I think the east-west traffic with MCP [Model Context Protocol] is growing faster than ever."

The rise of AI agents

Tony likens the rapid growth of AI agents to the challenges security teams faced a decade ago when shadow IT was a significant risk. Back then, users were bringing new apps into their work. Today, they are arming themselves with new AI tools. Often, they are using these tools without understanding what they are bringing into their organisation.

MCP brings a whole raft of new concerns. Tony says that understanding context is critical. While a single data point in a log might not look significant, it's important to look at how multiple pieces of information fit together.

"Netskope has always had a deep understanding of data and the context around it. When you come to MCP there's things like context poisoning, where malicious data is injected into the communication, trying to get the AI to do something it shouldn't. That might be invoking privileged tools, data over-collection, or trying to access sensitive information. What if one MCP server is compromised and contaminates and compromises others?" Tony says. 

Decoupling security from the AI models is an important step. While many AI models include security features, creating a security fabric that spans across tools and provides a consistent set of rules and performance is key. Risks must be mitigated without degrading performance.

The controls that are put in place need to address north-south and east-west traffic.

Security controls can't be two-dimensional

Tony says, "Security controls need to be omni-directional. Organisations need to ensure users are not sending sensitive information such as PII, health information or intellectual property to tools outside the corporate security bubble. This is where robust DLP [data loss prevention] tools are critical."

While a single prompt might look safe, a series of ten seemingly innocuous prompts might be an indicator of compromise. DLP tools that protect against AI-based threats need to be much smarter than traditional tools that looked for keywords and specific data types.

From a network perspective, organisations have traditionally managed their outbound traffic through a proxy mechanism. While this remains a valid architecture, Tony says many of the compromises he has seen have occurred when organisations have created exceptions that bypassed that control – often to boost performance.

"If there's any exception or bypass you create a potential vulnerability," says Tony. "You really cannot have trade-offs between security and performance. You must have security without adversely affecting performance. But organisations need to reduce their risk as well. Architecture matters and you must architect it right so that you can see everything and control everything."

New problems need new solutions

AI brings new risks that can't always be mitigated using old tools and methods. New gateway tools, such as the Netskope One AI Gateway, are important as it enables organisations to apply controls without having to send data through the cloud or off their network. The signals that come from AI tools need to be sent through to a SOC where policies can look at the context of AI activity to detect anomalous behaviours.  

"SOCs are designed to receive and process large volumes of security data. With AI, the volume of data that is generated is far greater than other tools. But by bringing that data into a central location, it's easier to process and look for potential compromises and data leakage. We've seen customers dealing with 14 million alerts per month. Having the tools to process those and detect issues is critical," Tony explains.

Tony also advocates red teaming AI models to aggressively seek out vulnerabilities and ensure AI tools aren't susceptible to prompt injection, jailbreaking or other compromises.

"You've got to allow organisations to innovate. They're doing some really great things with AI, but you've got to secure it and give them the performance they need."