It’s a nightmare scenario for anyone: a distressed call from a loved one telling you they’ve lost their wallet and need your help. They beg you to transfer money to their account and because you want to help them, you do so.
But what if the familiar voice on the other end of the line is not who you think it is? What if, in reality, this was part of a clever scam involving artificial intelligence and a cloned human voice?
How are scammers using Artificial Intelligence?
Artificial Intelligence (AI) refers to technology that enables a computer to perform tasks or ‘think’ more like a human. AI is becoming a larger part of our lives because it has a wide range of uses, improving efficiency and automation (where AI runs through a series of processes in a short period of time, rather than a human managing them). You may not even realise how AI is being used to supercharge daily activities, from using your mobile phone to shopping.
Unfortunately, scammers have also clocked on to the potential of AI and are using this technology to create even more convincing ways of tricking people out of their money, data and personal information.
It’s difficult at the best of times to keep up with how quickly technology evolves, let alone how scammers abuse it. Here, we set out some of the main ways AI is being used in scams, and how you can protect yourself against them.
What AI scams should you look out for?
Voice cloning
The scenario described at the start is based on a real scam that uses AI to impersonate human voices. Scammers can use AI programmes to clone a person’s voice and create what is known as an ‘audio deepfake’. These are high-quality voice clones that can replicate language, tone and even emotion, making them even more distressing – and even more convincing.
In extreme cases of voice cloning, consumers have reported being asked to send money to cover legal fees after hearing what sounds like real pleas from a family member who says they have been in an accident or been arrested.
More commonly, scammers can use voice cloning to impersonate your bank, salespeople, or customer helpline workers. This makes the call harder to spot as a con, compared with scams that use the more obviously fake automated ‘robot’ voice you are probably familiar with.
It’s also possible for your voice to be cloned and used in such a scam. In fact, AI can replicate a person’s voice from just a three-second recording, and audio deepfakes have been shown to dupe banks’ biometric protection systems and gain access to personal accounts.
The idea a voice could be spoofed in this way is unsettling and seems like something from a sci-fi movie. That’s why voice clone scams are incredibly effective, as they prey on our worst fears.
Phishing attempts
Typically, phishing emails pose as a real company like Amazon or Apple, but include unsafe attachments or links, or request money and information. They may include a deadline for an unpaid bill or to update your account details. This sense of urgency can be compelling, but these emails are often written with poor spelling and grammar, alerting you that it is a malicious attempt by cybercriminals to access your personal details or download a virus onto your device.
You may have heard of a programme called ChatGPT. This large language model chatbot has a remarkable ability to hold a dialogue, much like a real person, and can be used to generate text or answer questions. Experts warn of the potential for fraudsters to misuse programmes such as this to write even more realistic phishing emails and fool even more people than before.
Find out more about ‘phishing’ and what it is here.
It’s likely that you’ve already encountered a phishing email; research from the Office for National Statistics (ONS) revealed half of all adults in England and Wales reported receiving one last year. There are risks for businesses too, with 83% of UK companies identifying phishing attempts as the most common cyber-attack.
With AI improving scammers’ capabilities, it’s even more important to be aware of how common and realistic phishing emails can be.
Hacking
Another possible abuse of AI is its ability to write sophisticated malware designed to bypass security measures and crack passwords. In AI’s current form, an expert cybercriminal is likely able to teach a programme to write malware by training and correcting its code.
With scams, however, where there’s a will, there’s a way. The concern is that AI technology will eventually evolve to the point where someone with ill intent, but without the technical skill to create malware, could use an AI programme to do so.
Essentially, AI can do the hard work; all a scammer then needs to do is apply the technology by developing engaging scenarios, such as these common scams.
Spotting the warning signs: protecting against AI scams
New types of scams are alarming, but it is important to remember how you can protect yourself against them.
- Make sure you use secure passwords that include a combination of upper and lower case letters, numbers and symbols. For more tips on creating a secure password, A-Plan has a handy guide here.
- You can also set up two-factor authentication on websites and apps for extra protection. For example, when logging in to Facebook with your email and password, two-factor authentication means you will also need to enter a code sent to your mobile phone via text. This secondary level of information helps deter and prevent a scammer from gaining access to your account.
- Verbal passwords between your friends and family can also help to differentiate real emergencies from fake ones concocted by scammers. A secret safe word could be something that only has meaning to your loved ones, such as a phrase from a TV show you all enjoy or an unusual yet memorable word that stands out in the context of a conversation. You can then ask for it to confirm who you’re speaking to if you get a strange phone call asking for money or personal data. Just like your ‘online’ passwords, update these if you suspect they have been breached.
The best news is that humans are good at spotting these kinds of scams, thanks to our intuition. It’s wise to be wary of unexpected work or personal calls, especially from unfamiliar numbers, and particularly calls asking for money or sensitive information such as your address, date of birth or middle name.
If you find yourself in a scenario like the ones set out above, you can always make an excuse to end the call, then ring your friend, family member, or bank directly yourself to confirm. Remember to step back and ask yourself whether the situation makes sense. If something seems too good to be true, it often is. Equally, if something dramatic happens out of the blue, take a second to consider things rationally and ask yourself how likely the situation really is.
How to report a scam
Anyone can fall victim to a scam, so please don’t feel too embarrassed to report it if you do. By sharing the details, you can help protect and educate others.
You can report your experience to the Which? Scam Sharer and forward suspicious emails to report@phishing.gov.uk.
Now that you are more aware of the types of AI scams being operated, please consider sharing this with your friends and family.
Would you prefer to speak to a real person? Our in-branch advisors are here to help, whether over the telephone or in person. Simply pop into your local branch at a time to suit you.
Sources: ONS, NCSC, The Guardian, Gov.uk