SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers
Story image

Pillar Security reveals critical GenAI vulnerability risks

Fri, 11th Oct 2024

Cyber security firm Pillar Security has released its "State of Attacks on GenAI" research report, highlighting critical vulnerabilities in current Generative Artificial Intelligence applications.

The report is based on an analysis of more than 2,000 AI applications using real-world data collected over a three-month period. It identifies significant risks, including a high 90% success rate in attacks resulting in sensitive data leaks.

One of the key findings from the report is that 20% of jailbreak attack attempts successfully bypass GenAI application guardrails, posing a considerable threat to data integrity and confidentiality.

Analysis within the report reveals adversaries can execute an attack in an average of just 42 seconds, requiring minimal interaction—approximately five interactions—with GenAI systems to achieve their objectives.

Vulnerabilities are present at every interaction stage with GenAI systems, highlighting the necessity for comprehensive security measures to protect sensitive data, the report said.

The research notes an increase in the frequency and complexity of prompt injection attacks, as attackers employ more sophisticated techniques to circumvent existing security measures.

Dor Sarig, Chief Executive Officer and co-founder of Pillar Security, stated, "The widespread adoption of GenAI in organizations has opened a new frontier in cybersecurity.

"Our report goes beyond theoretical risks and, for the first time, shines a light on the actual attacks occurring in the wild, offering organizations actionable insights to fortify their GenAI security posture," they said.

The report also details top jailbreak techniques, such as 'Ignore Previous Instructions' where attackers manipulate AI systems to disregard initial programming, and 'Base64 Encoding' where malicious prompts are encoded to evade detection by security filters.

Primary motivations driving attackers include stealing sensitive data, proprietary business information, and personally identifiable information (PII), as well as bypassing content filters to produce disinformation, hate speech, phishing messages, and malicious code.

Sarig warned, "As we move towards AI agents capable of performing complex tasks and making decisions, the security landscape becomes increasingly complex.

"Organisations must prepare for a surge in AI-targeted attacks by implementing tailored red-teaming exercises and adopting a 'secure by design' approach in their GenAI development process," they said.

The report underscores the inadequacy of traditional static security measures against evolving AI threats. Jason Harrison, Chief Revenue Officer at Pillar Security, noted, "Static controls are no longer sufficient in this dynamic AI-enabled world.

"Organisations must invest in AI security solutions capable of anticipating and responding to emerging threats in real-time, while supporting their governance and cyber policies."

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X