Criminals are exploiting AI to create more convincing scams
June 12, 2024

I’m Lewis, the director of Shepherd IT. I’ve had a passion for technology since I was a young boy. I excelled in IT and computer science at college and started my career as a technology analyst, helping people solve their businesses’ IT problems.

One of the many amazing features of the latest generation of AI technologies is their ability to sound remarkably human.

AI chatbots can be instructed to generate text that you would never guess was authored by a machine. And they can keep doing so swiftly and with little human assistance.

As a result, it’s not surprising that cybercriminals have been employing AI chatbots to make their own lives simpler.

Police have highlighted three primary ways hackers are employing AI chatbots for nefarious purposes.

⦁ Better phishing emails

Phishing emails are designed to fool you into clicking a link that downloads malware or steals your information. Until now, many were easy to detect thanks to poor spelling and grammar. AI-written content is far more difficult to spot because it’s free of those errors.

Worse, crooks can personalize each phishing email they send, making it harder for spam filters to flag potentially dangerous content.

⦁ Spreading misinformation

Imagine a prompt like: “Write me ten social media posts that accuse the CEO of the Acme Corporation of having an affair. Mention the following news outlets.” Spreading misinformation and disinformation may not seem like an immediate threat to you, but it could lead to your employees falling for fraud, clicking malware links, or even harm your company’s or team members’ reputations.

⦁ Creating malicious code

AI can already produce fairly good computer code and is improving all the time, and criminals could use it to generate malware. The software’s designers aren’t to blame; the tool is simply doing what it’s told. But it remains a hazard until AI developers find a reliable way to guard against this kind of misuse. OpenAI, the creator of ChatGPT, is already working to prevent its products from being used maliciously.


What this does show is the need to stay one step ahead of the cyber crooks in everything we do. That’s why we work so hard with our clients to keep them protected from criminal threats, and informed about what’s coming next.

If you’re concerned about your people falling for increasingly sophisticated scams, be sure to keep them updated about how the scams work and what to look out for.

If you need help with that, get in touch.