Surge in Generative AI Tools for Cybercrime Sparks Concerns
A new breed of chatbot software has surfaced in underground forums, catering to "black hat" hackers seeking illicit gains. These tools, built on generative AI models similar to those behind ChatGPT, go by names such as "FraudGPT" and "WormGPT" and promise functionality ranging from writing malware and phishing emails to building attack sites and identifying vulnerabilities. They have proven especially effective at facilitating business email compromise (BEC) attacks.
Generative AI tools first appeared on cybercrime forums in mid-July with the introduction of "WormGPT." The tool appears to be tuned for BEC attacks and is reportedly based on GPT-J, an open-source language model released in 2021. Its key selling point is the ability to generate professional-sounding emails without requiring fluency in the target language.
Shortly afterward, in late July, "FraudGPT" emerged, advertising a broader set of capabilities: writing malicious code, scanning the internet for targets, assisting in the development of hacking utilities, generating scam pages, and training users in the application of cybercrime tools, among other functions.
Both tools initially debuted on mainstream "clearnet" internet forums, platforms that discuss black hat hacking only obliquely and cater mostly to amateurs, but were ultimately expelled for their overtly criminal nature. The developer consequently turned to Telegram to promote the offerings. The person behind "FraudGPT," operating under the alias "CanadianKingpin12," has also announced two forthcoming generative AI tools: "DarkBART," purportedly a malicious variant of Google's Bard AI, and "DarkBERT," described as a comprehensive application trained on content from the dark web. Both are said to support integration with Google Lens, enabling input via text and images, though their efficacy and eventual release remain uncertain.
This nascent category of cybercrime tool is likely to expand in the near future, whether through the specific generative AI models described above or through new developments. Current tools excel at generating natural-sounding emails, but it is plausible they will advance to serve more intricate objectives, such as orchestrating elaborate social engineering campaigns or identifying obscure zero-day vulnerabilities.
The rise of these tools threatens to put amplified capabilities in the hands of amateur hackers, increasing their impact through sheer attack volume. In parallel, an underground trade has emerged around exploiting legitimate generative AI models by circumventing the guardrails that prevent them from engaging in illicit activities: cybercriminals sell engineered prompts for payment, much as they sell stolen login credentials. "CanadianKingpin12"'s "DarkBERT" appears designed to facilitate this kind of access, potentially enabling cybercriminals to manipulate the original model of the same name, which was trained on dark web content precisely to combat cybercrime. Notably, the legitimate "DarkBERT" project implements safeguards against misuse for campaigns such as BEC.
As easily accessible generative AI cybercrime tools proliferate, organizations face mounting pressure to strengthen their defenses. AI-driven security tools may offer some relief through automated threat detection, but the frontline defense remains heightened employee awareness: a clear understanding of what these tools can do, combined with an appreciation of the elevated risk of sophisticated attacks targeting personnel, forms the bedrock of robust cybersecurity practice.
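To make the automated-detection point concrete, the sketch below shows a deliberately minimal heuristic screen for BEC-style emails. It is illustrative only, not a real security product: the allow-listed domain, the phrase list, and the `flag_email` function are all assumptions invented for this example, and production systems would rely on far richer signals (sender authentication such as SPF/DKIM/DMARC, ML classifiers, behavioral baselines).

```python
# Hypothetical allow-list of domains the organization actually uses (assumption).
TRUSTED_DOMAINS = {"example.com"}

# Urgency/payment cues often seen in BEC-style emails (illustrative list only).
SUSPICIOUS_PHRASES = [
    "wire transfer",
    "urgent",
    "gift card",
    "act now",
]

def flag_email(sender: str, body: str) -> list[str]:
    """Return heuristic warnings for a single email; empty list means no flags."""
    warnings = []
    # Check the sender's domain against the allow-list (catches lookalike domains
    # like "examp1e.com" only because they are absent from the list).
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        warnings.append(f"sender domain '{domain}' is not on the allow-list")
    # Scan the body for common urgency/payment phrasing.
    lowered = body.lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in lowered:
            warnings.append(f"body contains suspicious phrase '{phrase}'")
    return warnings
```

A screen this simple is easy for AI-generated emails to evade, which is exactly why the paragraph above stresses employee awareness alongside automation.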