Artificial intelligence (AI) has been hailed as a groundbreaking technology with limitless potential. However, as with any innovation, there is a dark side. Cyber scammers are finding ways to exploit AI for their own malicious purposes, posing significant risks to online security. In this article, we delve into the intersection of AI and cybercrime, exploring how chatbots and other AI tools can be used by scammers and the potential consequences of their actions.
Among the various AI tools, chatbots have gained considerable popularity; ChatGPT and Google’s Bard are two prominent examples. Unfortunately, cybercriminals have recognized their potential and are actively using them to generate phishing emails. Phishing, a widespread form of cyber scam, involves criminals impersonating a person or company and sending emails containing links to counterfeit websites or malicious software.
The real danger lies in the fact that chatbots excel at generating authentic-sounding text, a crucial skill for phishing gangs. Traditionally, phishing emails were easy to spot because of their poor grammar and punctuation, but chatbots can produce relatively clean and convincing prose, making phishing attempts far harder to identify. The accelerated pace of attacks that AI enables heightens the risk for individuals and organizations alike. According to the FBI, there were 300,497 reported phishing complaints in the past year, with losses of $52 million.
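With polished prose no longer a reliable tell, defenders increasingly fall back on structural signals. One classic phishing indicator is a link whose visible text names one website while the underlying address points somewhere else. A minimal Python sketch of that check, using only the standard library, might look like this (the domains, URLs, and the simple hostname comparison are purely illustrative, not a production filter):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self._href = None   # href of the <a> tag currently open, if any
        self._text = []     # text fragments seen inside that tag
        self.links = []     # finished (href, text) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html_body):
    """Flag links whose visible text names one domain but whose
    href actually points at a different one."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        # Only compare when the link text itself looks like a domain.
        if "." not in text or " " in text:
            continue
        shown = urlparse(text if "://" in text else "http://" + text).hostname
        actual = urlparse(href).hostname
        if shown and actual and shown != actual:
            flagged.append((href, text))
    return flagged

email = '<p>Verify now: <a href="http://evil.example.net/login">www.mybank.com</a></p>'
print(suspicious_links(email))
# → [('http://evil.example.net/login', 'www.mybank.com')]
```

Real mail filters combine many such signals, but the mismatch check illustrates why users are told to hover over a link before clicking: the displayed text and the true destination are independent, and scammers exploit that gap.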
AI technology has advanced to the point where it can imitate voices with astonishing accuracy, opening a whole new avenue for scammers: deepfake impersonations. In one disturbing case, a mother received a phone call from what she believed was her daughter begging for help, with the caller demanding a $1 million ransom for her release. The voice turned out to be an AI-generated imitation, and the mother soon realized the call was a scam. The incident illustrates the danger of deepfake impersonations, in which scammers pose as loved ones or colleagues during phone or video calls.
With just a few seconds of recorded audio, scammers can use online tools to create voice imitations capable of deceiving even cautious individuals. Jerome Saiz, founder of the French consultancy OPFOR Intelligence, predicts that small-time scammers already skilled at extorting credit card details through text messages will start using AI-generated voice impersonations. These scammers often possess technical expertise and are well placed to exploit the capability for their illicit activities.
Beyond phishing and voice impersonations, AI also introduces risks through the creation of advanced malware. Solid evidence is still limited, but there are claims that bespoke chatbots can identify vulnerabilities in code and even generate malicious code of their own. Their current limitations prevent them from directly executing the code they generate. Nevertheless, AI can serve as a coding tutor for scammers with limited skills, potentially helping them become more proficient at developing and deploying malware.
However, it is crucial to note that the extent of AI’s involvement in cybercrime remains uncertain. Experts like Shawn Surber from US cybersecurity firm Tanium assert that concerns surrounding generative AI are often based on fear of the unknown rather than specific threats. While the potential risks posed by AI-powered cyber scams should not be disregarded, it is essential to maintain a measured perspective rather than succumbing to unwarranted panic.
Artificial intelligence undoubtedly holds immense promise in many fields, but its potential to facilitate cyber scams is a pressing concern. Chatbots, voice impersonations, and AI-assisted malware creation are just a few of the alarming possibilities. Combating these threats requires a multifaceted approach combining technological advancements, robust cybersecurity measures, and user vigilance. As AI continues to evolve, it is imperative to stay one step ahead of cybercriminals, prioritizing the proactive development of defensive strategies to safeguard our digital landscape.