Generative AI Report: FraudGPT – AI-Powered Fraud Begins

Word on the street is that FraudGPT has launched. It is an AI bot built exclusively for offensive purposes, such as crafting spear-phishing emails, creating cracking tools, and forging fake identities, and it marks the first major instance of a publicly marketed, fraud-focused AI tool. FraudGPT follows in the footsteps of WormGPT and is sold on various dark web marketplaces and Telegram channels, where it has been circulating since at least July 22, 2023, for a subscription fee of $200 a month (or $1,000 for six months and $1,700 for a year).

The new tool, which already has confirmed sales and multiple reviews, could empower everyday individuals to become cybercriminals, generating an entirely new wave of fraudsters that businesses and governmental agencies will need to thwart. Seven companies (Google, Microsoft, Meta, Amazon, OpenAI, Anthropic, and Inflection) recently convened at the White House to announce voluntary commitments for the safe development of AI. 

The author of FraudGPT indicates that the tool can write malicious code, create undetectable malware, and find leaks and vulnerabilities, and claims more than 3,000 confirmed sales and reviews. The specific large language model (LLM) used to develop the tool is currently unknown.

Cybersecurity expert Ari Jacoby, CEO of Deduce, Inc., believes that AI-powered fraud will render legacy fraud-prevention tools ineffective and that a new wave of detection and prevention will be needed to counter the sophistication of AI-generated attacks. Top of mind? Using AI for good, equipping companies with data-powered countermeasures. Second, measuring and monitoring large data patterns to detect waves of fraud rather than focusing on individual vulnerabilities, as illustrated in the sketch below.
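
To make that second point concrete, here is a minimal sketch of the pattern-level approach: flag a fraud wave when aggregate event volume spikes well above its recent baseline, rather than scoring each event in isolation. This is an illustration only, not Deduce's actual method; the event stream, window size, and threshold are assumptions for the example.

```python
# Minimal sketch of the "monitor aggregate patterns" idea: flag a fraud wave
# when the volume of a tracked event (e.g., new-account signups per hour)
# spikes well above its rolling baseline, instead of scoring each event alone.
# The data, window size, and threshold below are hypothetical illustrations.

from collections import deque
from statistics import mean, stdev

def detect_fraud_wave(hourly_counts, window=24, z_threshold=3.0):
    """Return indices of hours whose event volume is anomalously high
    relative to a rolling baseline of the preceding `window` hours."""
    baseline = deque(maxlen=window)
    flagged = []
    for i, count in enumerate(hourly_counts):
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and (count - mu) / sigma > z_threshold:
                flagged.append(i)  # volume spike: possible coordinated campaign
        baseline.append(count)
    return flagged

# Hypothetical usage: a quiet baseline followed by a sudden surge of signups.
counts = [40, 38, 42, 41, 39, 40] * 4 + [400, 420, 390]
print(detect_fraud_wave(counts))  # -> indices of the surge hours
```

The rationale behind this kind of aggregate monitoring is that a coordinated, AI-generated campaign tends to show up as a statistical shift in overall traffic before any single account or email looks suspicious on its own.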

Our friends at SlashNext summarized the new FraudGPT findings in this way: “The same reformed black hat computer hacker who helped SlashNext expose WormGPT has now gained confirmation from the creator of FraudGPT that at least one more chat bot is currently under development, and is being referred to as ‘DarkBART.’ DarkBART is supposedly based on DarkBERT – a large language model developed by a team of South Korean researchers with the intention of helping to fight cybercrime. The threat actor known on dark web forums as ‘CanadianKingpin12’ claims to have gained access to DarkBERT and also claims that his new bot will have internet access and can be integrated with Google Lens.”
