AI Tools Are Helping Mediocre North Korean Hackers Steal Millions
The emergence of AI hacking tools has stirred concerns about a future in which automated systems let anyone uncover exploitable software vulnerabilities, essentially granting a digital intrusion superpower. For now, though, AI’s role in cybercrime is more mundane yet no less alarming: it lets relatively unskilled hackers run widespread, effective malware campaigns. This is exemplified by a North Korean cybercriminal group recently caught using AI to execute almost every phase of a massive operation that compromised thousands of victims to steal cryptocurrency.
A cybersecurity firm revealed the existence of a North Korean state-sponsored campaign known as HexagonalRodent, which deployed credential-harvesting malware on over 2,000 computers, primarily targeting developers involved in small cryptocurrency projects, NFT creation, and Web3 initiatives. By harnessing AI tools developed by US companies such as OpenAI, Cursor, and Anima, this group automated much of its hacking campaign, from malware coding to creating fraudulent company websites for phishing schemes. This AI-aided effort enabled them to pilfer up to $12 million in cryptocurrency within just three months.
The most notable aspect of the HexagonalRodent operation isn’t its sophistication, but how AI empowered an otherwise unskilled group to execute a lucrative theft mission for the North Korean state. According to the security researcher behind the discovery, these hackers lacked coding skills and infrastructure setup knowledge, relying heavily on AI to perform tasks they could not manage independently. This demonstrates how generative AI has become a force multiplier for such actors, enabling them to accomplish complex cyberattacks once limited to more capable hackers.
HexagonalRodent’s approach involved deceiving crypto developers with fake job offers from invented tech firms, complete with AI-generated websites. Victims were then tricked into downloading coding tests embedded with malware designed to infiltrate their systems and steal credentials, including access keys for cryptocurrency wallets. For all their effectiveness, the hackers practiced poor operational security: they left parts of their infrastructure unprotected, inadvertently exposing the prompts used to create the malware and the databases used to track victims, which allowed researchers to estimate the financial scale of the heist. The malware itself carried markers often associated with AI-generated code, such as extensive English comments and prolific emoji use, signs that the software was likely produced by large language models rather than written by hand.
While the malicious code fit typical malware behavior patterns detectable by standard security tools, many individual victims lacked such protections, providing fertile ground for AI-crafted malware to succeed. This underscores how targeting less-defended users allows these campaigns to operate under the radar. The campaign also highlights North Korea’s strategy of supplementing inexperienced IT workers’ limited skills with AI, scaling up cyber operations with relatively untrained personnel.
Contrary to the idea that AI might reduce the number of hackers needed, this operation involved as many as 31 individuals, a sign of growth rather than downsizing: AI tools let more operators perform tasks that once required entire development teams. North Korea’s expanding reliance on AI for cybercrime is part of a broader pattern in which the country uses state-backed cyber activity to fund its nuclear ambitions, develop infrastructure, and evade sanctions, operating like a state-sanctioned criminal enterprise. AI supports a range of illegal workflows, including exploit development, social engineering, and infrastructure creation at scale.
Reports indicate that North Korean IT worker programs also use AI for deceptive practices such as fabricating fake IDs, enhancing remote job interviews with deepfake technology, and refining social engineering efforts. Observations from multiple AI platform providers confirm that North Korean hackers exploit commercial AI services extensively, using them for coding, communication, and malware development. In response, these companies have banned or blocked suspect accounts linked to such activities.
Some AI providers maintain that these technologies do not grant fundamentally new hacking capabilities but instead increase the speed and scale of operations. Yet the real issue lies in the practical application of AI to augment cybercriminal activity, enabling relatively unskilled operators to execute sophisticated campaigns efficiently. Security experts emphasize that attention should focus less on theoretical AI-enabled breakthroughs and more on the tangible threats posed by state-backed groups already using AI to expand their cyber capabilities rapidly.