LLMjacking exploits target AI models like DeepSeek, experts warn
Cybercriminals are expanding their tactics through a rapidly growing security threat known as LLMjacking, which targets emerging artificial intelligence (AI) models such as DeepSeek.
The Sysdig Threat Research Team (TRT) reported that since it first discovered LLMjacking in May 2024, hackers have been exploiting large language models (LLMs) by stealing API keys and cloud credentials. This gives them illicit access to costly AI models, much as cryptojacking hijacks victims' compute resources for cryptocurrency mining.
DeepSeek, whose V3 model was released in December 2024, quickly became a target for these attacks. Hackers were found to have integrated DeepSeek-V3 into illicit OpenAI Reverse Proxy (ORP) operations immediately following its release. By January 2025, DeepSeek-R1 was also being served through these unauthorised proxies, suggesting that cybercriminals watch for new AI models and move to exploit them soon after release.
These ORP setups disguise illegal API access behind dynamic domain names and concealed IP addresses. Cybercriminals then turn the stolen API keys into profit by selling proxy access through various online platforms and underground forums.
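At its core, an ORP is little more than a forwarding server that injects a stolen key into each request. The minimal Python sketch below illustrates that mechanism only; the upstream endpoint, port, and key pool are illustrative placeholders, and the ORP deployments Sysdig observed are considerably more elaborate, layering on the domain rotation and IP concealment described above.

```python
# Conceptual sketch of the ORP mechanism: a trivial reverse proxy that
# swaps a stolen API key into each forwarded request. The key pool and
# port are hypothetical placeholders, not observed attacker values.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.openai.com"   # upstream LLM API being abused
KEY_POOL = itertools.cycle([          # stolen keys, cycled per request
    "sk-stolen-key-1",
    "sk-stolen-key-2",
])

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={
                "Authorization": f"Bearer {next(KEY_POOL)}",  # inject a stolen key
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())  # relay the model's reply to the buyer

HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```

The simplicity is the point: once a key is stolen, reselling access requires almost no infrastructure, while every token billed lands on the credential owner's account.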
One such proxy, hosted at vip[.]jewproxy[.]tech, offered access for USD $30 per month. Analysis of its usage logs indicated that millions of tokens were processed in a matter of days, leaving victims liable for cloud service costs amounting to tens of thousands of dollars. Claude 3 Opus alone was found to have accumulated nearly USD $39,000 in unauthorised charges.
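Those figures are easy to reproduce with back-of-the-envelope arithmetic. The sketch below uses Claude 3 Opus's published API pricing of USD $15 per million input tokens and USD $75 per million output tokens; the traffic volumes are hypothetical, not figures taken from the proxy's logs.

```python
# Back-of-the-envelope for how quickly proxied traffic runs up a victim's bill.
# Token volumes are illustrative assumptions; the per-million-token rates are
# Claude 3 Opus's published API pricing.
INPUT_RATE = 15.0    # USD per million input tokens
OUTPUT_RATE = 75.0   # USD per million output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost billed to the credential owner for one slice of traffic."""
    return (input_tokens / 1e6) * INPUT_RATE + (output_tokens / 1e6) * OUTPUT_RATE

# A proxy pushing ~2M input and ~400k output tokens per hour, around the clock:
hourly = cost_usd(2_000_000, 400_000)  # $60/hour
print(f"hourly: ${hourly:,.2f}, weekly: ${hourly * 24 * 7:,.2f}")  # ~$10k/week
```

A month of round-the-clock traffic at that assumed pace lands in the same range as the USD $39,000 in Claude 3 Opus charges reported above.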
LLMjacking also trades on underground markets, with communities on platforms such as Discord and 4chan sharing tools and techniques. To evade detection, attackers use services like TryCloudflare tunnels to mask their infrastructure and employ techniques such as CSS concealment or password protection for proxy sites. Credential theft is the cornerstone of the strategy: stolen API keys are tested with specialised validation scripts, along the lines sketched below, before use or sale.
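The validation step is conceptually simple. A generic check along the following lines, not the specific scripts Sysdig observed, asks an OpenAI-style API to list available models and treats an HTTP 200 as proof the key is live; defenders can run the same check to confirm that a leaked key has actually been revoked.

```python
# Generic liveness check of the kind attackers script against stolen keys,
# equally useful for defenders verifying a revocation took effect.
# The endpoint is OpenAI's /v1/models listing; the key is a placeholder.
import requests

def key_is_live(api_key: str) -> bool:
    resp = requests.get(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    # 200 means the key authenticates; 401 means it is revoked or invalid.
    return resp.status_code == 200

print(key_is_live("sk-example-leaked-key"))
```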
Security experts recommend several strategies to mitigate the risks posed by LLMjacking. These include securing API keys with secrets management tools like AWS Secrets Manager or Azure Key Vault, using temporary credentials to reduce exposure, and monitoring for unusual usage patterns with tools like Sysdig Secure. Further protective measures include scanning repositories for exposed credentials with software such as TruffleHog and GitHub Secret Scanning.
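As an illustration of the first recommendation, the sketch below retrieves an LLM API key from AWS Secrets Manager at runtime rather than embedding it in code or configuration that could leak to a repository; the secret name "prod/llm/api-key" is a hypothetical example.

```python
# Minimal sketch of pulling an LLM API key from AWS Secrets Manager at
# runtime instead of hardcoding it. Assumes AWS credentials are available
# to boto3; the secret name is hypothetical.
import boto3

def get_llm_api_key() -> str:
    client = boto3.client("secretsmanager")
    # get_secret_value returns the stored string; rotation policies and
    # tight IAM scoping on the secret limit the blast radius of any leak.
    return client.get_secret_value(SecretId="prod/llm/api-key")["SecretString"]

api_key = get_llm_api_key()
```

Because the key never lands in source control, repository scanners like TruffleHog have nothing to find, and rotating the secret invalidates anything an attacker may have captured.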
As AI adoption continues to rise, free access to expensive models makes LLMjacking an alluring business for cybercriminals and a source of significant billing damage for affected organisations. The DeepSeek incident highlights organisations' critical need to tighten access controls and establish robust monitoring. With cybercriminals continuously advancing their techniques, the security sector is urged to remain vigilant and proactive in defending against these threats.