Recorded Future reveals potential AI exploitation in 2024
Thu, 21st Mar 2024

Recorded Future, known as the world's leading intelligence company, has revealed frightening insights into how cybercriminals could potentially exploit artificial intelligence (AI) in their operations. The company's latest report uncovers various malicious use cases of AI that organisations should watch out for in 2024.

The report, titled "Adversarial Intelligence: Red Teaming Malicious Use Cases for AI," examines a range of use cases: deepfakes that impersonate executives, influence operations masquerading as legitimate websites, self-augmenting malware that can evade YARA (a pattern-matching tool commonly used by malware researchers), and aerial imagery reconnaissance targeting critical infrastructure and sensitive industries such as defence, government, energy, manufacturing and transportation.

The research was conducted by Recorded Future's threat intelligence division, Insikt Group, which tested these use cases to understand the power and limitations of current AI models. The models ranged from large language models (LLMs), capable of generating human-like text, to multimodal models, which can interpret images alongside text, and text-to-speech (TTS) models.

One of the key findings concerns the use of deepfakes to impersonate executives. The research found that open-source tools can produce deepfake video or audio clips from publicly available footage or recordings of the individuals, and that these clones can be trained on samples of less than a minute. However, because open-source models currently suffer from latency issues, producing live, real-time clones would require commercial solutions, whose consent mechanisms an attacker would first need to bypass.
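To illustrate how little source material such a voice clone requires, the sketch below uses the open-source Coqui TTS library and its XTTS v2 voice-cloning model. The library, model name and file names are illustrative assumptions; the report does not name specific tooling, and this merely shows why a sub-minute public audio sample is part of an executive's attack surface.

    # Minimal sketch, assuming the open-source Coqui TTS package is installed
    # (`pip install TTS`). The sample file and output names are hypothetical.
    from TTS.api import TTS

    # Load a multilingual voice-cloning model (downloaded on first use).
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # "executive_sample.wav" stands in for a short (under one minute) clip of
    # the target's voice, e.g. taken from a public earnings call or talk.
    tts.tts_to_file(
        text="This is a synthetic sentence rendered in the sampled speaker's voice.",
        speaker_wav="executive_sample.wav",
        language="en",
        file_path="cloned_output.wav",
    )

The point of the sketch is not the specific library but the workflow: a few lines of code and one short public recording are enough to generate pre-recorded audio, which is why the report treats real-time cloning as the remaining technical hurdle.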

Another concerning finding was the potential for AI to be used in influence operations that impersonate legitimate websites. AI could generate and spread disinformation at scale, tailored to specific audiences, and could help produce the complex narratives such campaigns rely on while drastically reducing the cost of content production compared with traditional means. However, creating believable spoofs of legitimate websites would still require significant human intervention.

The report also shed light on how AI could be used to develop self-augmenting malware capable of evading YARA detection. Although current generative models face challenges in producing syntactically correct code, the report nonetheless flags this as a potential risk. Lastly, multimodal AI can be used to process publicly available imagery and video to identify equipment and pinpoint the locations of industrial control system (ICS) hardware.
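To make the YARA point concrete, the sketch below uses the yara-python bindings to show how a rule keyed on a hard-coded string fires on the original artifact but misses a variant that stores the same payload in encoded form. The rule and samples are hypothetical; the report does not publish the rules it tested, but this is the kind of superficial rewrite a code-modifying model could automate.

    # Minimal sketch, assuming the yara-python bindings (`pip install yara-python`).
    import base64
    import yara

    # A simplistic, string-based rule that keys on one hard-coded command.
    rules = yara.compile(source=r'''
    rule suspicious_recon_string
    {
        strings:
            $cmd = "cmd.exe /c whoami" ascii
        condition:
            $cmd
    }
    ''')

    original  = b"payload: cmd.exe /c whoami"
    # A rewritten variant could carry the command base64-encoded and decode it
    # at runtime, so the literal string never appears in the scanned bytes.
    rewritten = b"payload: " + base64.b64encode(b"cmd.exe /c whoami")

    print(bool(rules.match(data=original)))   # True  - the rule fires
    print(bool(rules.match(data=rewritten)))  # False - same behaviour, no match

This is why purely signature-based rules are brittle against automated code rewriting, and why the report's authors point organisations towards behavioural detection as well.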

In response to these findings, an unnamed spokesperson from Recorded Future's Insikt Group stressed the need for organisations to stay vigilant. "Executives' voices and likenesses are now part of an organisation's attack surface, and organisations need to assess the risk of impersonation in targeted attacks," they advised, recommending the use of multiple communication channels and verification measures. They also suggested that organisations invest in multi-layered and behavioural malware detection capabilities and exercise caution over publicly accessible images of sensitive equipment and facilities.