SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

Anthropic's 'Mythos' signals a new era of AI-driven cyber threats

Tue, 21st Apr 2026

Anthropic is preparing a controlled rollout of its Mythos artificial intelligence model, a system that has drawn significant attention across the cybersecurity sector for its ability to identify and exploit software vulnerabilities beyond the capabilities of existing tools.

Until now, Mythos has been restricted to a small group of security partners due to concerns about its potential offensive cyber applications. The company is continuing to limit wider deployment under an initiative known as Project Glasswing, working with selected organisations to test the model in defensive environments.

The development comes amid growing concern that AI is shifting from automating known cyberattack methods to independently discovering new vulnerabilities. Previous generative AI tools largely helped attackers scale existing techniques using public exploit data and established hacking playbooks. Mythos, by contrast, is being positioned as a system capable of identifying entirely new weaknesses in software and infrastructure.

The model has already been linked to the discovery of a long-standing flaw in the Linux kernel that had gone undetected for decades, intensifying debate over how frontier AI systems should be governed and deployed.

Mark Stockley, cybersecurity evangelist at ThreatDown, said the system represents a fundamental shift in the threat landscape.

"Mythos is finding novel ways to hack systems. AI has used the human hackers' playbook to breach systems. It gives criminals scalability and speed, but not new tactics. Mythos is the first sign of a future in which AI can devise novel ways to attack targets that human hackers have not considered. This presents an entirely new challenge for defenders that generalist IT staff will not be able to keep up with. We have time to prepare, but it is inevitable. Defending against autonomous AI agents that can think on their feet and come up with novel tactics will require expert human analysts armed with the latest threat intelligence, backed by their own ever-vigilant AI agents," said Stockley.

Within financial services and other regulated industries, the emergence of systems like Mythos is accelerating discussions around cyber resilience. Organisations are increasingly shifting focus from preventing every breach to limiting damage once systems are compromised, particularly as AI reduces the time required to discover vulnerabilities or misconfigurations.

At the same time, enterprises are being forced to reconsider how they govern their own use of AI. Many already deploy machine learning for fraud detection, transaction monitoring, and operational analytics, a footprint that adds complexity to the task of monitoring and securing internal AI systems.

Nik Kairinos, chief executive of RAIDS AI, said Mythos highlights a broader governance challenge.

"What makes Mythos significant is not only the capability, but what Anthropic chose to do with it. A frontier model, without instruction, surfaced a Linux kernel vulnerability that had gone unnoticed for 27 years. Restricting release to critical infrastructure partners is the right call, but it only buys time. When finance ministers, central bank governors, and the CEOs of major banks are publicly concerned about a single AI model, the framing has already shifted. We are no longer debating whether frontier AI creates systemic risk. We are watching institutions scramble to catch up to capabilities that are already in the wild."

"The harder problem sits downstream. You cannot prevent every zero-day from being found, by AI or otherwise. What you can do is monitor every AI system in your estate for anomalous behaviour, in real time, with a continuous evidence trail. The organisations that instrumented their AI before this week are in a very different position from those still treating governance as an annual audit exercise," he said.

Security practitioners say the rise of autonomous vulnerability discovery will widen the gap between attackers and defenders, increasing demand for specialist analysts capable of interpreting complex, AI-generated attack patterns. Generalist IT teams, they argue, will struggle to keep pace without deeper tooling and intelligence support.

Vendors and consultancies are responding by promoting hybrid defence models that combine human expertise with continuously operating AI systems designed to detect subtle indicators of compromise.

Michael Vallas, global technical principal at Goldilock Secure, said this shift will force a redesign of how organisations structure their defences.

"The imminent rollout of Mythos to UK banks fundamentally shifts the economics of cyber risk for financial institutions. With AI enabling vulnerability discovery at unprecedented scale and speed, the strategic priority moves from eliminating every flaw, which is unattainable, to containing breaches and limiting lateral movement once an exploit occurs. Software-defined controls alone cannot address this new reality."

"Financial boards must now prioritise architectures that deliver enforced isolation and segmentation, ensuring that even sophisticated attacks remain contained with minimal business impact. True resilience will be measured not by perfection in defence, but by the ability to compartmentalise environments and protect critical assets when vulnerabilities inevitably surface," he said.

As frontier AI systems continue to evolve, the broader debate is shifting toward containment, governance and systemic risk, reflecting a growing recognition that cybersecurity strategy is entering a new phase defined by autonomous, adaptive threat generation.