AI vulnerability reports surge as hackbots reshape cyber risks
HackerOne's latest report reveals a significant increase in artificial intelligence (AI) vulnerability reports alongside a rapid expansion in AI adoption across organisations.
The 9th annual Hacker-Powered Security Report, themed "The Rise of the Bionic Hacker", details how AI is reshaping risks within digital infrastructures, with vulnerabilities in AI systems accelerating at a faster pace than previous years. This transformation has seen organisations expand programmes involving AI by 270%, while prompt injection attacks have emerged as the most pressing security issue, rising by 540%.
AI vulnerability trends
The findings indicate a 210% increase in AI vulnerability reports on the HackerOne platform. More than $2.1 million was paid out in bounties for AI-related vulnerabilities, representing a 339% year-over-year growth. The report also highlights a 152% increase in sensitive data leaks, emphasising the complex risk landscape for AI-driven technologies.
Prompt injection-a technique allowing attackers to manipulate AI model outputs by changing user input-has become the fastest-growing AI attack vector. The report recorded a 540% surge in valid prompt injection reports, underscoring the challenge organisations face in controlling how AI systems interpret and respond to inputs.
Another growing concern is proper access control. According to the report, 13% of organisations experienced an AI-related security incident in 2025, and 97% of those lacked adequate access management mechanisms.
Return on mitigation impacts
HackerOne states that its programmes collectively avoided $3 billion in breach losses in 2025 by using its Return on Mitigation (RoM) methodology to measure impact. In terms of scope, HackerOne's customers included 1,121 new AI assets in their security initiatives, an increase of 73% from the prior year.
The increasing focus on AI threats is also reflected in customer sentiment, with 79% expressing heightened concern about associated risks. Total payouts across all bug bounty programmes reached $81 million, up 13% year-on-year.
AI-driven security research
The report finds that security researchers are increasingly adopting AI-native approaches, with 70% now integrating AI tools in their security workflows. These tools support tasks such as exploit development, automated reconnaissance, and streamline reporting processes. Furthermore, 59% of researchers are regularly using generative AI (GenAI) for tasks related to vulnerability discovery.
The emergence of "hackbots", or fully autonomous agents, is influencing the landscape. These agents submitted over 560 valid reports, with a 49% success rate. HackerOne notes that while hackbots can discover surface-level vulnerabilities, such as cross-site scripting flaws (XSS), deeper and more complex security issues continue to require human creativity.
"AI demands a different approach to risk and resilience. AI vulnerabilities increased by more than 200% this year, while enterprises expanded AI security initiatives at nearly three times last year's pace. At the same time, a new generation of 'bionic hackers'-security researchers using AI to enhance their hunting abilities-are driving the discovery of security issues at unprecedented scale. The organisations that thrive will be those that evolve with AI and tap into the expertise of security researchers in both testing and response."
This comment from Kara Sprague, Chief Executive Officer of HackerOne, highlights the dual-edged impact of AI, driving both attack sophistication and defence mechanisms.
James Kettle, Director of Research at PortSwigger, spoke further on the evolving role of ethical hackers in the AI era, stating:
"Hackers are becoming builders. By crafting AI enhancements throughout our workflows, we're amplifying our unique tradecraft to hack deeper, faster. We are entering an era of bespoke automation, and the power of the crowd is growing. This is a rapidly emerging field of research, and we're just getting started."
Industry outlook
The report concludes that organisations are rapidly increasing the scope of their AI assets under security programmes, in parallel with a clear rise in both the volume and complexity of detected vulnerabilities. The intersection of AI with traditional and automated security methods is giving rise to a new class of hybrid, or 'bionic', hackers who leverage both human ingenuity and AI tools to address security challenges at scale.