
Rise in AI-driven phishing sites targets crypto users

Tue, 5th Nov 2024

Experts at Kaspersky's AI Research Center have discovered an increase in the use of Large Language Models (LLMs) by cybercriminals to produce content for large-scale phishing and scam attacks.

According to Kaspersky, threat actors attempting to generate fraudulent websites en masse often leave behind distinguishing artifacts, such as AI-specific phrases, that set AI-created websites apart from those crafted manually. Most of the phishing attempts Kaspersky has identified so far specifically target users of cryptocurrency exchanges and wallets.

In analysing these fraudulent resources, Kaspersky's experts identified several key characteristics that help discern when AI has been used to generate content for phishing or scam websites. One prominent sign is the presence of disclaimers and refusals to execute particular commands, a common example being the phrase "As an AI language model...", seen on phishing pages targeting KuCoin users.

Examples include scenarios where the language model declines to act as a search engine or states its inability to perform logins on external sites.
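To make the idea concrete, a minimal sketch of such a check might scan a page's visible text for refusal-style boilerplate. The phrase list and function below are illustrative assumptions, not Kaspersky's detection logic:

```python
import re

# Illustrative list of LLM refusal/disclaimer boilerplate; a real detection
# system would use a far broader, regularly updated set of indicators.
REFUSAL_PATTERNS = [
    r"as an ai language model",
    r"i cannot (?:log in|perform logins?)",
    r"i(?:'m| am) unable to act as a search engine",
    r"i can't do exactly what you want",
]

def looks_llm_generated(page_text: str) -> list[str]:
    """Return the refusal-style patterns found in the page text, if any."""
    text = page_text.lower()
    return [p for p in REFUSAL_PATTERNS if re.search(p, text)]

# Example: a phishing page that accidentally kept the model's disclaimer.
sample = "As an AI language model, I cannot perform logins on external sites."
print(looks_llm_generated(sample))
# ['as an ai language model', 'i cannot (?:log in|perform logins?)']
```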

Another notable indicator of LLM usage is the inclusion of concessive clauses, such as, "While I can't do exactly what you want, I can try something similar." This has been observed in phishing attempts directed at Gemini and Exodus users, where the language model declines to provide detailed login instructions.

"With LLMs, attackers can automate the creation of dozens or even hundreds of phishing and scam web pages with unique, high-quality content," explained Vladislav Tushkanov, Research Development Group Manager at Kaspersky. "Previously, this required manual effort, but now AI can help threat actors generate such content automatically."

LLMs are capable of generating not just text blocks but entire web pages, with artifacts appearing both in the text itself and within areas like meta tags, which describe a web page's content in its HTML code.

Phishing websites that mimic the design of legitimate platforms, such as one targeting the Polygon site, have been identified with meta tags containing messages from AI models indicating that the requested text exceeded the model's length limit.
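A hedged sketch of how meta-tag artifacts like these could be surfaced, using only Python's standard-library HTML parser (the marker list is an illustrative assumption, not Kaspersky's detection logic):

```python
from html.parser import HTMLParser

# Illustrative markers of LLM output spilling into page metadata.
AI_MARKERS = ("as an ai language model", "my last update", "exceeds my limit")

class MetaTagCollector(HTMLParser):
    """Collect the content attribute of every <meta> tag on a page."""
    def __init__(self):
        super().__init__()
        self.meta_contents = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            content = dict(attrs).get("content")
            if content:
                self.meta_contents.append(content)

def suspicious_meta_tags(html: str) -> list[str]:
    """Return meta-tag contents that contain AI-style artifacts."""
    parser = MetaTagCollector()
    parser.feed(html)
    return [c for c in parser.meta_contents
            if any(m in c.lower() for m in AI_MARKERS)]

page = '<meta name="description" content="As an AI language model, I cannot write a description that exceeds my limit.">'
print(suspicious_meta_tags(page))
```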

Other subtle indicators of AI involvement include the use of specific phrases such as "delve," "in the ever-evolving landscape," and "in the ever-changing world." Although these terms alone do not definitively prove AI generation, they can serve as further signals.

Examples of such language have been found on pages targeting Ledger and Bitbuy users, with phrases like "In the dynamic realm of cryptocurrency" and "the ever-evolving world of cryptocurrency" used to captivate potential victims.

LLMs also tend to state the point up to which their knowledge of the world extends, typically with phrases like "according to my last update in January 2023." On scam sites this can appear as an attempt to signal currency or relevance, but it also serves as an identifier of AI involvement.
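These softer signals, the stylistic phrases mentioned above and knowledge-cutoff statements, lend themselves to a simple scoring heuristic. The sketch below is an assumption about how such signals might be combined for triage, not a description of Kaspersky's methodology:

```python
import re

# Stylistic phrases that are weak, non-conclusive hints of LLM authorship.
STYLE_MARKERS = [
    "delve",
    "in the ever-evolving landscape",
    "in the ever-changing world",
    "in the dynamic realm of",
]

# Knowledge-cutoff statements, e.g. "according to my last update in January 2023".
CUTOFF_RE = re.compile(
    r"(?:as of|according to) my last (?:update|training data)(?: in \w+ \d{4})?",
    re.IGNORECASE,
)

def ai_signal_score(text: str) -> int:
    """Count weak AI-authorship signals; higher scores warrant closer review."""
    lowered = text.lower()
    score = sum(marker in lowered for marker in STYLE_MARKERS)
    score += 2 * len(CUTOFF_RE.findall(text))  # cutoff statements weigh more
    return score

sample = ("In the ever-evolving landscape of crypto, our rates are accurate "
          "according to my last update in January 2023.")
print(ai_signal_score(sample))  # 3
```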

Combining LLM-generated text with tactics that complicate phishing page detection is becoming more common. Techniques include the use of non-standard Unicode symbols to obfuscate text, hindering detection by rule-based systems, as seen in phishing pages impersonating Crypto.com.
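As a minimal illustration of spotting that kind of obfuscation, the sketch below flags non-ASCII look-alike characters in otherwise Latin text and applies Unicode normalisation before rule-based matching; it is a generic example, not a reconstruction of any vendor's rules:

```python
import unicodedata

def suspicious_characters(text: str) -> list[tuple[str, str]]:
    """Return non-ASCII characters with their Unicode names, which often
    reveals look-alike substitutions (e.g. a Greek sigma standing in for 'C')."""
    return [(ch, unicodedata.name(ch, "UNKNOWN"))
            for ch in text if ord(ch) > 127]

def normalised(text: str) -> str:
    """NFKC-normalise text so decorated or stylised character variants
    collapse toward their plain forms before rule-based matching."""
    return unicodedata.normalize("NFKC", text)

# Hypothetical lure: the first character is a Greek lunate sigma, not a Latin 'C',
# and the 'l' in 'login' is a mathematical bold letter.
spoofed = "Ϲrypto.com 𝐥ogin"
print(suspicious_characters(spoofed))
print(normalised(spoofed))
```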

"Large language models are improving, and cybercriminals are exploring ways to apply this technology for nefarious purposes. However, occasional errors provide insights into their use of such tools, particularly into the growing extent of automation. With future advancements, distinguishing AI-generated content from human-written text may become more challenging, making it crucial to use advanced security solutions that analyze textual information along with metadata, and other fraud indicators," said Vladislav Tushkanov.

Kaspersky suggests several measures to guard against phishing: verifying the spelling of hyperlinks, typing web addresses directly into the browser, and employing modern security solutions that offer safe browsing features. These precautions help protect users against dangerous websites and fraudulent online activity.
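The first of those precautions, checking a link's spelling, can be approximated programmatically. The sketch below compares a URL's host against a small, illustrative allow-list of legitimate domains, with a similarity threshold chosen arbitrarily for the example:

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative allow-list; a real deployment would use a maintained list.
KNOWN_DOMAINS = {"kucoin.com", "exodus.com", "crypto.com", "ledger.com"}

def check_link(url: str, threshold: float = 0.8) -> str:
    """Classify a URL as trusted, a likely look-alike of a known domain, or unknown."""
    host = urlparse(url).hostname or ""
    host = host.removeprefix("www.")
    if host in KNOWN_DOMAINS:
        return "trusted domain"
    for good in KNOWN_DOMAINS:
        if SequenceMatcher(None, host, good).ratio() >= threshold:
            return f"possible look-alike of {good}"
    return "unknown domain"

print(check_link("https://www.crypto.com/login"))  # trusted domain
print(check_link("https://crpyto.com/login"))      # possible look-alike of crypto.com
```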
