The race is on to end voice cloning fraud



As AI-powered voice cloning boosts fraudulent scams, we sit down with ESET’s Jake Moore to discuss how to hang up on ‘hi-fi’ scam calls – and what the future holds in deepfake detection.


Would you be lured by a fake call from your CEO asking you to transfer money? As our colleague Jake Moore discovered, you could be. With ample computing cycles and ever more training data, deepfakes keep getting better. Feed them a long CEO speech and they will pick up inflection, tone, and other nuanced speech patterns that can eventually make them quite convincing.

Now, in the United States at least, there is a competition to break the fake and, hopefully, to equip defensive systems with ways to thwart such attacks. The challenge from the Federal Trade Commission (FTC) had competitors pitting whatever technology they could muster against voice counterfeiting, with a $25,000 prize at stake.

Competitions like this have been used for everything from spacesuit design to autonomous vehicles to solar technology; this one aims to boost interest and participation in deepfake defense, aspiring to spark a race to protect consumers from the harms of AI-enabled voice cloning. Once registered through the portal, entrants can pit their ideas against other approaches and, hopefully, take home the gold.

Could any of this actually stop someone like Jake? I asked him.


Q: Will businesses be able to defend against this threat using current and widely available technologies (and if so, which ones)?

Jake: Unfortunately, I don’t think the answer yet lies in countermeasure technologies. We are still in the nascent phase of this new era of AI, and many companies are grappling with its good and bad uses and capabilities. Technology is continually being developed to help those in need, but in the early days of any new technology, defenses simply cannot keep pace with the evolving ways it is misused to scam people and businesses.

Q: Same question for individuals – think pensioner scams, romance scams, and the like?

Jake: AI has enabled fraudsters to carry out their actions on much larger scales than before, meaning the numbers game has become even bigger. When more people are targeted, the rewards reaped are much greater and without any real additional effort on the part of the fraudsters.

However, individuals need to become savvier about all scams, from classic frauds to current trends. In particular, with the advent of voice cloning and the falsification of likenesses with striking precision, people need to stop assuming that seeing (and hearing) is believing. Simply being aware that such scams are possible gives those targeted the confidence to scrutinize these communications more carefully and not be afraid to question the requests being made.

There is plenty of awareness guidance for such scams available online, and people need to keep up with the latest information to become, and stay, protected.

Q: What can institutional anti-fraud teams do to mitigate attacks against their users/customers, and who should pay if people get scammed?

Jake: Anti-fraud teams have a duty to provide training for everyone in an organization, and it should be ongoing rather than a one-off exercise. Simulated attacks and war games also help get the message across, turning a serious problem into a more engaging dynamic and allowing people to fail in a safe environment.

Once people have experienced a simulated modern attack and witnessed the scale of this new technology, they will be more likely to remember the advice when a real one hits. AI remains hugely impressive, so seeing deepfakes in action in a safe environment is a chance to demonstrate the potentially dangerous results they can produce without creating too much of a fear factor.

Q: This is clearly a game of cat and mouse, so who will win?

Jake: From the beginning, there has been a chase between “cops and robbers” where the robbers are usually a few steps ahead. However, as technology improves, we need to make sure we don’t fall behind and give fraudsters an even greater advantage.

It is essential that individuals have the right tools and knowledge to best combat these inevitable strikes, so that fraudsters do not always win. By staying up to date on the latest attack vectors and evolving scams, individuals and businesses have the best chance of defending themselves against this new wave of technology attacks.

One thing is certain: this is no longer a theoretical threat. Expect 2024 and beyond to be the time when fraudsters find new automated ways to launch voice attacks at speed, especially in response to cataclysmic events where a trusted official “asks” you to do something. It will all sound convincing, carry an air of urgency, and come with what seems like multi-factor auditory authentication – yet you can still be scammed, even if you “personally heard it from a manager.”

Thanks for your time, Jake.
