SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers
Story image
Mindgard unveils tool to assess cyber risk in AI systems
Fri, 9th Feb 2024

Mindgard, a leader in AI cybersecurity, unveiled its new free online tool, AI Security Labs, on 7th February 2024. This instrument is designed to enable engineers to evaluate the cyber risk to AI systems, including large language models such as ChatGPT. It will offer a crucial service in exposing previously undetected risks in the rapidly developing field of artificial intelligence.

The rapid development and adoption of AI by enterprises frequently leaves them exposed to new attack vectors which conventional security tools cannot address. The use of so-called foundation models, like ChatGPT, can introduce unforeseen risks. Until now, there has been no automated process capable of testing the potential impacts of attacks which this may invite.

Mindgard's AI Security Labs aims to bring these hidden vulnerabilities into the light of day. Many such risks remain undetected due to the complexity involved in their identification and the need for specialised skills. The costs and time involved in traditional AI penetration tests, when offered, are prohibitive, requiring extensive programming and testing by hard to find and highly paid teams. Additionally, any alteration in the AI stack, model, or underlying data requires a completely fresh test. This situation often leaves senior management oblivious to the potential impact of prospective disruptions.

AI Security Labs automates the threat discovery process. It provides repeatable AI security testing and dependable risk assessment in minutes rather than months, enabling engineers to select from a range of attacks on popular AI models, datasets and frameworks to assess potential vulnerabilities. These results offer insight on current threat possibilities in AI attacks and the potential for evasion, IP theft, data leakage and model copying threats,

Dr. Peter Garraghan, CEO/CTO of Mindgard and Professor at Lancaster University said, "Most organizations are flying blind when deploying AI, with no way to perform red teaming against emerging cyber risks. Until now, there has been nowhere for technical teams to learn about the risks to AI security. We created this free tool to empower engineers dealing with AI adoption, providing them with the knowledge and capabilities needed to properly evaluate the attack surface."

The rapid adoption of LLMs, such as the prominent ChatGPT 3.5, has led to the surfacing of potential attacks on AI systems. Threats such as data poisoning, where chatbots have been manipulated to swear or present anomalous results, have been observed. Data extraction is emerging as another risk, wherein an LLM exposes sensitive data on which it has been trained. Incidents of entire AI/ML models being copied are also on the rise.

"Established cybersecurity tools are ineffective against AI's new threat landscape," added Dr. Peter Garraghan. "Our free offering bridges that gap by putting test capabilities directly into engineers' hands, enabling them to secure AI before deployment."

Mindgard's AI Security Labs is now accessible via an online sign-up with no payment required, providing a cyber risk tool including over 170 unique attack scenarios and detailed reports on AI cyber risk. Mindgard is also planning on making its solution available on Azure marketplace, with Google Cloud Platform (GCP) and Amazon Web Services (AWS) to follow in the coming months.