ChatGPT: A Fraud Fighter’s Friend or Foe?

It seems that everywhere you go, someone is talking about generative AI. 

There’s no denying that the technology will have a massive impact on the way businesses and individuals operate. From drafting emails to fleshing out a research paper with statistics, nearly everyone stands to benefit from generative AI.

In retail specifically, online merchants have been using generative AI to customize the consumer experience. From personalized discounts to shopping assistant bots, the possibilities in the space are virtually endless.

However, in recent weeks, some of Big Tech’s major players and even legislators have expressed concerns. Elon Musk, Steve Wozniak, and Jaan Tallinn, among other prominent figures, signed an open letter calling for a pause on further AI development until more is known about its potential dangers. We’ve even seen some governments react swiftly, with Italy becoming the first Western country to ban ChatGPT.

As a fraud fighter who uses AI and machine learning technologies, I’m keenly interested in digging deeper into ChatGPT: is it a fraud fighter’s friend or foe?

ChatGPT, Meet Social Engineering

I am biased toward focusing on technical and analytical challenges. Fraud ring doubling down on address manipulation? New sneaky trick to increase the payoff of returns abuse? Working out why catching amateur fraudsters is more challenging than you’d think? I’m on it. This is the stuff that keeps me in this industry.

The reality of online crime is that the weakest link is often a human one. Humans may be bored, worried, stressed, inattentive, desperate, and scared — and a clever fraudster can exploit all these emotions. ChatGPT and its generative AI friends will inevitably become part of this aspect of the ongoing fraud war.

Here are just a few of the ways I see this playing out:

  • Pig butchering scams: A nasty term for an ugly scam, this is when people are tricked into investing in fake stocks or through fake investment apps. Some victims lose thousands or even hundreds of thousands of dollars to these scams. Victims are lulled into a false sense of security by a relationship developed with a scammer via text messages. ChatGPT and similar AI bots are friendly, conversational, and convincing. They’re ideal for building, at the very least, the initial relationships for pig butchering, especially since these typically follow a script anyway.
  • Romance scams: Working on a similar principle, clever generative AI chatbots are a good substitute for low-grade human scammers in a romance scam. Much of the chat is formulaic, as you’ll see if you search for examples of victims describing their experiences. You could have one human supervising several chatbots, probably without losing much in terms of the scam’s success. 
  • Business Email Compromise (BEC) schemes: An old favorite with fraudsters, BEC is still going strong. It’s evolved over the years, and today’s scam emails are often personalized to match the target’s company, role, and the tools or programs their company uses. Generative AI will have no trouble generating precisely this kind of email, and it’ll create a new one for every prompt, making it harder to search for reports of the email you’ve just received.
  • Deepfake phishing: I’m thinking of tailored phishing. You know those cases where an employee is tricked into sending large sums of money because they believe their boss or the CEO told them to? How much more convincing will those attempts be when the fraudster can ask generative AI to create an email, a message, or even a voice message in the style of that executive? The one whose written opinions, interviews, and panel discussions are easy to find online, and thus easy for AI to mimic?

Streamlining and Scaling Fraud

The generative AI explosion is still very new. It kicked off right after the fraud ring that made the 2022 holiday season so difficult for many US-based physical goods merchants. For me, all of the discussion of ChatGPT, Bard, Bing, and the rest happens within that context.

The Master Manipulators, as the fraud ring came to be called, were effective largely because they operated at considerable scale.

Now imagine that operation working hand in hand with generative AI. The scale just kicked up another level.

Then there’s refund claims abuse, in which fraudsters who specialize in this area work out which tricks are most effective against specific merchants, and even against particular customer service agents, and act accordingly. Generative AI will also come in handy wherever those claims can be made via chat or email.

This is Not a Drill

ChatGPT has already been used “in the wild” to quickly create all the materials for a fraud scam. As with other uses of ChatGPT and its competitors, “prompt engineering” is critical: you have to know what to ask for. But, like using search engines effectively, that skill can be learned. Moreover, it requires no special technical knowledge or ability, which puts it within reach of amateurs, “script kiddies,” and people in a hurry.

It’s kind of like the democratization of fraud attack materials. And it’s already happening. 

In many ways, this is an expansion of the Crime-as-a-Service industry that already dominates the online criminal ecosystem, in which fraudsters can buy (or buy access to) stolen data, bots, scripts, apps that simplify identity switching, and more.

The difference is that this is all “homegrown.” Someone doesn’t need much understanding of that ecosystem to use generative AI to make their fraud faster, easier, and more effective.

The Real Worry of the Reality Check

For now, the enticing thing about ChatGPT and others like it is that they feel ripe with potential. Right now they’re buggy, inaccurate, and unreliable, despite being impressive and fun to use. Chat-based AI spouts nonsense or “hallucinates” imaginary facts. Image-based generative AI struggles to draw human hands. But the question everyone is asking is: if this is what they can do now, what will they be able to do next year?

It’s an exciting thought but also a frightening one. A survey by MITRE and the Harris Poll found that 78% of Americans are at least somewhat concerned that artificial intelligence can be used for malicious purposes, and most support strong regulations governing AI. Considering the inevitable criminal applications, the sense of danger only gets stronger.

With fraud prevention, we know that getting the machine learning and the technology right is only half the battle. What makes fraud prevention genuinely effective is technology guided and informed by the research and intuition of human experts.
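To make that concrete, here’s a minimal sketch in Python of what a decisioning layer that blends the two might look like. Everything in it (the Transaction fields, the rules, the thresholds, the function names) is hypothetical and purely illustrative, not any vendor’s actual system; the point is simply that analyst-written rules sit alongside the model score, so human research can escalate a transaction the model alone would have approved.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    shipping_address: str
    billing_address: str
    account_age_days: int
    model_score: float  # 0.0 (likely safe) to 1.0 (likely fraud), from an ML model

# Rules written by human fraud analysts, encoding intelligence the model
# may not have learned yet (say, a new address-manipulation trick).
def rule_address_mismatch(tx: Transaction) -> bool:
    return tx.shipping_address != tx.billing_address and tx.amount > 500

def rule_new_account_big_ticket(tx: Transaction) -> bool:
    return tx.account_age_days < 2 and tx.amount > 1000

ANALYST_RULES = [rule_address_mismatch, rule_new_account_big_ticket]

def decide(tx: Transaction) -> str:
    """Combine the ML score with analyst rules; either side can escalate."""
    if tx.model_score > 0.9:
        return "decline"  # the model is confident on its own
    if any(rule(tx) for rule in ANALYST_RULES):
        return "review"   # human expertise flags what the model missed
    return "approve"

if __name__ == "__main__":
    tx = Transaction(
        amount=1200.0,
        shipping_address="742 Evergreen Terrace",
        billing_address="12 Main St",
        account_age_days=1,
        model_score=0.4,  # the model alone would have approved this one
    )
    print(decide(tx))  # -> review
```

In this toy example, the model score alone would wave the transaction through, but the analyst-written rules send it to review. That interplay, not either piece on its own, is what does the work.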

So, right now, it’s unclear whether ChatGPT will become our friend or our foe. But we’d better start preparing ourselves for the latter.

About the Author

Doriel Abrahams, Head of Risk, U.S., Forter.
