Tech Insight: Keep One Step Ahead of Voice Cloning Scams

In this Tech Insight, we look at the rise of AI voice cloning scams, how they work, how they’ve already cost UK businesses dearly, and what practical steps you can take to protect your team and finances.

A Convincing New Threat

Artificial intelligence (AI) has changed the game for fraudsters, affording them new and more sophisticated opportunities. For example, with as little as 10 seconds of recorded speech, scammers can now create an eerily accurate replica of someone’s voice, whether it belongs to your boss, a family member, or even you!

All Too Real

Once the preserve of science fiction, AI-generated voice scams are now alarmingly real. In one high-profile UK case back in 2019, the voice of an energy firm’s chief executive was cloned and used to trick an employee into transferring £200,000 to a fraudulent account. The call was convincing, urgent, and executed with chilling precision, and the money was gone in minutes.

Unfortunately, this certainly wasn’t a one-off. By the beginning of 2024, it was clear that AI voice scams were on the rise across the UK and globally. According to the FBI in the U.S., senior citizens there lost $3.4 billion to fraud in 2023, with AI making these scams more “believable and scalable.” Meanwhile, Starling Bank reports that over a quarter (28 per cent) of UK adults have been targeted by AI voice scams in the past year, yet nearly half remain unaware that this type of fraud even exists. The technology, it seems, is outpacing public awareness.

When Did This Start?

While telephone scams have been around for decades, voice cloning is the new twist. Generative AI tools (many of which are freely available) allow fraudsters to replicate someone’s speech patterns, tone, and even emotional inflection. The result is a synthetic voice so convincing that even close family members or long-standing colleagues may not detect the ruse.

Although high-profile cases date back to 2019 (as mentioned earlier), it’s only since 2022 that voice cloning has become truly accessible to scammers. For example, companies like ElevenLabs and Microsoft have demonstrated advanced text-to-speech models capable of near-human performance, and cybercriminals have been quick to adopt them.

How?

These scams typically begin with audio scraped from social media, corporate videos, or even voicemail greetings. A scammer feeds that audio into a cloning tool, crafts a script, and then makes a real-time phone call (or sends a voice message) that sounds like it’s from someone the victim knows and trusts.

The Anatomy of a Voice Scam

The aim of the scam is always the same: money, or sensitive (and valuable) information. Here’s a brief example of how this type of fraud often plays out:

– Reconnaissance. The attacker identifies a target, typically a finance employee or executive assistant, and gathers personal or corporate audio from YouTube, LinkedIn, or company webinars.

– Cloning. The voice is synthesised using AI. This takes minutes, not hours.

– The call. A fake crisis is created. Perhaps the “CEO” is stranded abroad and needs urgent help wiring money. Perhaps a family member has been in an accident. The voice sounds real, and it’s filled with stress.

– The ask. Under pressure, the victim sends funds or divulges login credentials. The scammer hangs up. It’s already too late.

In a recent experiment by cybersecurity expert Jake Moore, a cloned voice convinced a financial director to transfer £250, all within 15 minutes of setup. While this was staged for demonstration, it shows how easily such scams can succeed.

Why Businesses Are Prime Targets

While consumers are vulnerable, it’s businesses (particularly SMEs) that face the most serious risk. Impersonating a family member might net a few thousand pounds, but impersonating a company executive could yield hundreds of thousands.

Cybercriminals also know that businesses often have looser verification standards than banks. A plausible email might not raise suspicion, and a familiar voice on the phone raises even less. Scammers also know that manufactured urgency means routine checks get bypassed and, because the voice on the line appears to belong to someone in authority (such as the MD or FD), staff are often too intimidated to question it.

Practical Steps to Protect Your Business

Fortunately, there are some clear and actionable ways to reduce your risk. These include:

1. Introduce a verification protocol for financial requests

Never authorise a payment or password change based on a phone call alone. Always require secondary confirmation, ideally in writing via a known secure channel.

2. Create an internal code phrase system

Set up internal “safe words” for your senior staff members/leadership team. These can be used during sensitive calls or unexpected requests to prove identity. Ensure everyone knows never to share the code phrase until it has been requested as part of a verification check.

3. Train your team to pause and question

Run short awareness sessions. Teach staff to recognise red flags such as urgency, secrecy, and requests to bypass usual procedures. As a good rule of thumb: if it feels off, check it out.

4. Use call screening and voice detection tools

Enable spam filtering and voicemail screening on business phones. There are now tools that analyse voices for signs of synthetic generation (although this tech is still developing).

5. Limit public audio exposure

Think twice before publishing unedited video or voice content of your executive team online. Cloning starts with access to audio, so the less you share, the safer you are.

6. Lock down social media accounts

Fraudsters don’t just steal voices; they also scan for job titles, family names, and routines. Encourage employees to keep LinkedIn profiles professional and avoid sharing holiday plans or personal updates on public platforms.

What If You Suspect a Scam?

If you receive a suspicious call or voicemail, hang up and verify. Contact the person directly using a trusted number or through another platform. If a payment has been made, inform your bank immediately and report the fraud to Action Fraud (the UK’s national reporting centre for fraud and cybercrime).

Also, consider submitting a report to the National Cyber Security Centre (NCSC) or your local police cyber unit.

A Human Problem, Not Just a Tech One

AI may be the tool, but emotion is the weapon. Scammers rely on panic, fear and split-second decisions. If your staff are prepared, trained and empowered to question, your business is far less likely to fall victim.

As Ben Colman, CEO of US-based deepfake detection company Reality Defender, puts it: “Any strategy that relies on detecting voice glitches is now outdated. You won’t hear the difference. You need process-based defences, not your ears.”

What Does This Mean For Your Business?

The rise of AI voice cloning scams is a clear warning that digital defences alone are no longer enough. These attacks don’t break through firewalls or exploit software bugs; they exploit trust, urgency, and the human instinct to respond quickly in a crisis. That’s what makes them so dangerous, and so effective.

For UK businesses, especially SMEs, the implications are serious. With increasingly accessible AI tools and huge volumes of voice data publicly available online, any organisation could find itself targeted and, with voice-based scams becoming more refined, even cautious staff can be caught off guard. The financial losses can be significant, but the reputational damage and operational disruption that follow may be just as costly.

This is not just a technology issue; it’s a leadership one. Business owners, directors, and department heads need to recognise that if someone can sound like them on the phone, the usual rules of communication need to change. Setting up internal code words, creating structured verification processes, and training employees to pause before reacting are all small steps that can make a big difference.

Other stakeholders also have a role to play. For example, regulators will need to keep pace with how generative AI is being misused, while technology providers must consider safeguards that prevent voice models from being abused. Insurers, too, may need to begin scrutinising how businesses prepare for this specific type of fraud, just as they do with phishing or ransomware.

The businesses that fare best are likely to be those that treat this risk as both a technical and a human challenge. The most advanced AI tools in the world won’t help if staff still believe that a familiar voice guarantees authenticity. Trust is no longer a voice on the other end of the line; it’s a process, backed by policies and shared across the team.

If your business hasn’t already had this conversation, now is a good time to start because, when the call comes in, it won’t be obvious that it’s fake, and that’s exactly the point.

