Most people have come across "deepfakes" while browsing the web. Deepfake voice clips can be so convincing that we accept them as real: we might hear our favourite star voicing a personal opinion, only to realise later that the clip was a masterfully crafted fake. Worse than the initial sting of being fooled is that the tech-savvy criminals who create these fakes can also exploit the individuals and organisations that view or listen to them.
With fraud attacks against financial institutions up 269% over the past four years, staying ahead of scammers and cybercriminals matters more than ever. In this article, we explore the evolving role of the human voice in verifying identity and establishing trust. We'll discuss voice cloning, speech recognition and artificial intelligence (AI), the risks of trusting what you hear, and how to protect yourself and your organisation in the age of deepfakes.
From futuristic voice recognition to deepfake voice risk
These days, voice recognition helps detect fraud, protects access to financial accounts and powers the most common application of speech biometrics: the ubiquitous contact centre. But can we trust what we hear? Or will new AI-based deepfake voice tools render our trust in voice as useless as a simple password?
Voice biometrics works by breaking a recording down into segments of frequencies and using them to build a unique "voiceprint" that captures tones and inflexions. That voiceprint can then serve as an identity-verification artefact.
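To make the idea concrete, here is a minimal Python sketch that summarises a recording as a fixed-length vector and compares two recordings by cosine similarity. The file names are placeholders, and real systems use neural speaker embeddings rather than averaged MFCCs, but the compare-two-vectors idea is the same:

```python
# A toy "voiceprint": average MFCC features over a recording.
# Real systems use neural speaker embeddings; this is only a sketch.
import numpy as np
import librosa

def voiceprint(path: str) -> np.ndarray:
    """Load audio and summarise it as a fixed-length feature vector."""
    y, sr = librosa.load(path, sr=16000)                 # mono, 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # shape (20, frames)
    return mfcc.mean(axis=1)                             # average over time

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: scores near 1.0 suggest the same speaker."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file names for illustration.
enrolled = voiceprint("enrolment_sample.wav")
caller = voiceprint("incoming_call.wav")
print(f"match score: {similarity(enrolled, caller):.2f}")
```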
Deepfake voice in practice
In the 90s, movies portrayed voice authentication that made logging in as easy as speaking, which sounded pretty futuristic at the time. Yet films like Sneakers (1992) also showed how, at least in theory, an individual's voice could be spoofed. As with any type of security, a system is only as strong as its least secure point, and the human element is often the weakest link in the chain.
As well as compromising security, being duped by a deepfake voice or another deepfake scam can be costly. In October 2021, Forbes reported on a $35 million bank heist in which fraudsters used a cloned company director's voice, backed by confirming emails, to authorise transfers. The lawyer hired to coordinate the procedures believed everything was legitimate and began making the transfers. Forbes also reported that an energy company in the UK fell victim to a similar ruse in 2019.
Beating back the fakes - technology’s role in protection
Today, capturing someone's voice with a portable tape recorder has given way to far more advanced algorithms that use existing voice data to clone a voice for live calls.
Freely available websites such as Resemble or Descript let anyone generate simple voice messages in the style of their favourite personalities, and pranksters are creating deepfake videos that show just how easily we can all be tricked into believing something we see (or hear) is real when it isn't.
To protect clients, firms are deploying even more advanced AI models that examine physiological qualities such as tone, pitch and volume, together with behavioural attributes such as inflexions and accents, to flag potentially generated voices. These technologies attempt to spot synthetic audio, including deepfake voice, in real time.
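As a rough illustration of what such detection involves, the sketch below trains a simple classifier on a few hand-picked spectral features. The file names and labels are hypothetical, and production anti-spoofing detectors are deep networks trained on large corpora rather than three summary statistics, but the shape of the problem is the same: features in, genuine-or-synthetic decision out.

```python
# Illustrative anti-spoofing sketch: simple spectral features plus a
# linear classifier. Real detectors are far more sophisticated.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def spoof_features(path: str) -> np.ndarray:
    """Summarise a recording with a few coarse spectral statistics."""
    y, sr = librosa.load(path, sr=16000)
    flatness = librosa.feature.spectral_flatness(y=y).mean()
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
    return np.array([flatness, centroid, rolloff])

# Hypothetical labelled training data: 1 = genuine, 0 = synthetic.
paths = ["real_1.wav", "real_2.wav", "fake_1.wav", "fake_2.wav"]
labels = [1, 1, 0, 0]
X = np.stack([spoof_features(p) for p in paths])
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, labels)

probe = spoof_features("incoming_call.wav").reshape(1, -1)
print(f"probability the call is genuine: {clf.predict_proba(probe)[0, 1]:.2f}")
```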
One of the leading AI software vendors, Verint, uses voice biometrics and other predictive factors to identify professional scammers against a database of known fraudulent voiceprints. These criminals can be detected even if they answer security questions correctly and dupe agents.
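Conceptually, this is a one-to-many (1:N) screen: the caller's voiceprint is scored against every entry on a watchlist. The sketch below is purely illustrative, reusing the voiceprint idea from the earlier example; it is not Verint's actual API, and the threshold is an assumption:

```python
# Illustrative 1:N watchlist screen against known fraudster voiceprints.
# Not a real vendor API; the threshold would be tuned on real data.
import numpy as np

WATCHLIST_THRESHOLD = 0.85  # assumed value for this sketch

def screen_caller(caller: np.ndarray,
                  watchlist: dict[str, np.ndarray]) -> list[str]:
    """Return the IDs of watchlist entries the caller's voice resembles."""
    hits = []
    for fraudster_id, print_vec in watchlist.items():
        score = np.dot(caller, print_vec) / (
            np.linalg.norm(caller) * np.linalg.norm(print_vec))
        if score >= WATCHLIST_THRESHOLD:
            hits.append(fraudster_id)
    return hits
```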
In the world of finance, trading desks can also use identity authentication. More and more vendors offer technology that recognises the unique vocal characteristics, or "voiceprint", of enrolled customers within seconds using live call data, helping to reduce the number of security questions and average call-handling times.
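The decision logic on a live call might look something like the illustrative sketch below, where the one-to-one similarity score (computed as in the earlier voiceprint example) determines how much additional questioning is needed. The thresholds and outcomes are assumptions for the sketch:

```python
# Illustrative 1:1 verification step: map a voiceprint match score to
# a call-handling decision. Thresholds are assumptions, not standards.
def verification_step(score: float) -> str:
    if score >= 0.90:   # strong match: skip most security questions
        return "verified"
    if score >= 0.70:   # borderline: fall back to security questions
        return "ask_security_questions"
    return "escalate"   # poor match: treat the caller as unverified
```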
Using compliance as a way to avoid deepfake voice scams
In financial markets, regulators require firms to follow specific compliance protocols to ensure voice interactions are sound and do not enable fraudulent activity such as market abuse.
Voice collaboration and compliance providers like Speakerbus are extending their portfolios and partnerships to address the increasing importance of compliance for clients.
These solutions allow users to manage their compliance and security obligations more efficiently, using automation to identify and isolate malicious or fraudulent behaviours.
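As a loose illustration of that kind of automation, the sketch below flags call transcripts containing social-engineering red-flag phrases. It is hypothetical, not a feature of any named vendor; real platforms use far richer models than keyword rules, but the ingest-score-isolate workflow is similar:

```python
# Hypothetical compliance-automation sketch: flag transcripts that
# contain phrases associated with social-engineering pressure.
RISK_PHRASES = [
    "keep this between us",
    "urgent transfer",
    "bypass the usual process",
]

def flag_transcript(call_id: str, transcript: str) -> dict:
    """Return a simple review record for a single call transcript."""
    text = transcript.lower()
    hits = [p for p in RISK_PHRASES if p in text]
    return {"call_id": call_id, "flagged": bool(hits), "reasons": hits}

print(flag_transcript("call-001", "This is urgent, keep this between us."))
```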
How to avoid being duped by deepfake voice
Technology is only one link in the chain when combating fraudsters. Staff awareness and training also play a key role in an organisation's comprehensive defence: training should identify attack vectors and prepare staff to question out-of-character requests, even when they recognise the voice.
Understanding and following company policies and procedures is vital. For example, if you receive an email saying a senior C-level manager needs something done over the phone, can you call them back or jump on a video call to confirm it's really them when the call comes in? Can anyone else validate the request?
Whatever your role, consider how you can vet calls to ensure you're talking to the real person and not a deepfake voice. The four areas below can aid your decisions (a minimal sketch of combining them follows the list):
- Something you/they know, e.g., a password or verification question using unique data
- Something you/they have, e.g., an authenticator app, token or certificate on your device
- Something unique about the individual, such as fingerprints or voiceprints
- Something they've done, using a recent transaction, trade or position
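As a minimal illustration, the sketch below treats each of the four areas as an independent check and proceeds only when enough of them pass. The factor names and the two-factor minimum are assumptions for the example, not a policy recommendation:

```python
# Illustrative sketch: require at least two independent factors before
# acting on a voice request. The checks themselves are placeholders.
def should_proceed(factors: dict[str, bool], minimum: int = 2) -> bool:
    """Proceed only when enough independent factors have been verified."""
    return sum(factors.values()) >= minimum

request_checks = {
    "knows_verification_answer": True,      # something they know
    "has_registered_token": False,          # something they have
    "voiceprint_match": True,               # something unique to them
    "confirmed_recent_transaction": False,  # something they've done
}
print(should_proceed(request_checks))  # True: two factors verified
```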
On the client side, suppose your financial advisor or broker calls with the latest stock tip or buy recommendation. Be proactive and upfront with your own vetting: spend time, say, catching up on personal lives before giving out any financial information, and validate that they are who they say they are by discussing details only they would know about you and your previous transactions.
Summary
With audio and video samples uploaded freely every day, from recordings of Zoom presentations to company videos and social media posts, this form of fraud will only worsen as the technology grows more sophisticated and fraudsters continue to abuse it.
There's no foolproof way to avoid deepfake scams, but staying aware and validating who you're talking to will reduce your exposure, even as the technology and the malicious intent behind it become more prevalent.
Key Takeaways:
- Deepfakes are already convincing and continue to improve, while the tools that create them are less complex than you might imagine.
- Compliance and voice-biometrics vendors offer workflow and protection tools that monitor voice interactions and flag suspicious activity.
- Follow official procedures, even when asked to bend the rules.