Cybercriminals are leveraging AI for malicious use
Cybercriminals are putting artificial intelligence to malicious use, both as an attack vector and as an attack surface, according to a new report.
Deepfakes are currently the best-known use of AI as an attack vector, but the report warns that new screening technology will be needed to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.
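The report does not name specific screening techniques, but as a rough illustration of what media screening can look like, the sketch below applies error level analysis (ELA), a long-standing image-forensics heuristic: recompress a JPEG at a known quality and inspect the residual, since spliced or regenerated regions often recompress differently from untouched ones. This is a minimal sketch only, not anything the report prescribes; ELA predates deepfakes and is far from sufficient against them, and the suspect.jpg file name is a hypothetical placeholder.

```python
# Minimal error level analysis (ELA) sketch for screening images.
# Assumption: Pillow is installed (pip install Pillow); suspect.jpg
# is a hypothetical input file, not something from the report.
import io

from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return an ELA image; brighter regions changed more on recompression."""
    original = Image.open(path).convert("RGB")

    # Recompress in memory at a fixed, known JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Pixel-wise absolute difference between the two versions.
    diff = ImageChops.difference(original, recompressed)

    # Stretch the faint residual so it is visible for manual review.
    extrema = diff.getextrema()  # per-channel (min, max) tuples
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return diff.point(lambda value: min(255, value * 255 // max_diff))


if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

In practice, screening pipelines for synthetic media rely on trained detectors rather than a single classical heuristic like this one.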
A new report jointly developed by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro, examining current and predicted criminal uses of artificial intelligence (AI), was released today. It provides law enforcement, policy makers and other organisations with information on existing and potential attacks that leverage AI, along with recommendations for mitigating these risks.
"AI promises the world greater efficiency, automation and autonomy. At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology." says Edvardas ileris, head of Europol's Cybercrime Centre.
"This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems," he says.
For example, AI could be used to support:
- Convincing social engineering attacks at scale
- Document-scraping malware to make attacks more efficient
- Evasion of image recognition and voice biometrics
- Ransomware attacks, through intelligent targeting and evasion
- Data pollution, by identifying blind spots in detection rules
"As AI applications start to make a major real-world impact, it's becoming clear that this will be a fundamental technology for our future," says Irakli Beridze, head of the Centre for AI and Robotics at UNICRI.
"However, just as the benefits to society of AI are very real, so is the threat of malicious use. We're honoured to stand with Europol and Trend Micro to shine a light on the dark side of AI and stimulate further discussion on this important topic."
The paper also warns that AI systems are being developed to enhance the effectiveness of malware and to disrupt anti-malware and facial recognition systems.
"Cybercriminals have always been early adopters of the latest technology and AI is no different. As this report reveals, it is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works," explains Tony Lee, head of Consulting, Hong Kong - Macau, at Trend Micro.
"We're proud to be teaming up with Europol and UNICRI to raise awareness about these threats, and in so doing help to create a safer digital future for us all."
The three organisations conclude the report with several recommendations:
- Harness the potential of AI technology as a crime-fighting tool to future-proof the cybersecurity industry and policing
- Continue research to stimulate the development of defensive technology
- Promote and develop secure AI design frameworks (see the sketch after this list)
- De-escalate politically loaded rhetoric on the use of AI for cybersecurity purposes
- Leverage public-private partnerships and establish multidisciplinary expert groups
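The report stops short of prescribing concrete controls, but one small example of what a secure AI design framework might include is screening incoming training data before it reaches a model, given the threats to AI data sets flagged earlier. The following toy sketch makes that assumption concrete using scikit-learn's IsolationForest as an off-the-shelf anomaly detector; the screen_training_batch helper and the synthetic data are hypothetical, and real data-poisoning attacks are usually far subtler than the crude outliers simulated here.

```python
# Toy sketch: screen an incoming training batch against trusted data
# before ingestion. Assumptions: NumPy and scikit-learn are installed;
# screen_training_batch and all data here are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest


def screen_training_batch(
    trusted: np.ndarray, incoming: np.ndarray, contamination: float = 0.05
) -> np.ndarray:
    """Keep only incoming rows that look consistent with trusted data."""
    # Fit an anomaly detector on data we already trust.
    detector = IsolationForest(contamination=contamination, random_state=0)
    detector.fit(trusted)

    # predict() returns +1 for inliers and -1 for outliers.
    labels = detector.predict(incoming)
    return incoming[labels == 1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trusted = rng.normal(0.0, 1.0, size=(500, 4))    # clean feature vectors
    clean_batch = rng.normal(0.0, 1.0, size=(100, 4))
    poisoned = rng.normal(8.0, 0.5, size=(25, 4))    # crude injected points
    incoming = np.vstack([clean_batch, poisoned])

    kept = screen_training_batch(trusted, incoming)
    print(f"kept {len(kept)} of {len(incoming)} incoming samples")
```

A filter like this only raises the bar; it is no substitute for provenance controls on where training data comes from.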