How AI deepfake tools can deceive you

14 November 2023

The Beatles have once again delighted millions of fans around the world by releasing a new song with the help of artificial intelligence, which was used to restore parts of an old recording and improve its sound quality. The world welcomed the band's new song with joy, but there is a darker side to using AI to create deepfake voices and images.

Fortunately, such deepfakes, and the tools used to create them, are not yet well developed or widespread. However, their potential for use in fraud schemes is extremely high, and the technology is evolving rapidly.

What can voice deepfakes do?

OpenAI recently introduced an Audio API that can generate human-sounding speech from text input. So far, this OpenAI software is the closest thing to real human speech publicly available.

In the future, such models may become a new tool in the hands of attackers. The Audio API reads any supplied text aloud, and users can choose which of several preset voices will pronounce it. In its current form, the OpenAI model cannot be used to clone a specific person's voice, but it does illustrate how rapidly voice generation technology is developing.
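
To make this concrete, here is a minimal sketch of what such a text-to-speech request looks like with OpenAI's Python library. The model name, voice name, and output file are illustrative choices taken from the publicly documented API; treat this as a sketch, not a recipe:

    # Minimal text-to-speech sketch using the openai Python package (v1.x).
    # Assumes the OPENAI_API_KEY environment variable is set; "tts-1" and
    # "alloy" are documented model/voice presets, chosen here for illustration.
    from openai import OpenAI

    client = OpenAI()  # picks up the API key from the environment

    response = client.audio.speech.create(
        model="tts-1",   # speech-generation model
        voice="alloy",   # one of several preset voices
        input="Hello! This voice was generated entirely by a machine.",
    )

    # Save the generated audio to an MP3 file.
    response.stream_to_file("speech.mp3")

Note that the voices are fixed presets: the API offers no way to clone an arbitrary person's voice, which is exactly why the model in its current form cannot be used for voice deepfakes.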

Today, there are practically no tools capable of producing a high-quality deepfake voice that is indistinguishable from real human speech. In recent months, however, more and more tools for generating human voices have been released. Where users once needed basic programming skills, these tools are becoming easier to work with, and in the near future we can expect models that combine ease of use with quality of output.

Fraud using AI voices is still rare, but "successful" cases are already known. In mid-October 2023, American venture capitalist Tim Draper warned his Twitter followers that scammers could use his voice in fraud schemes. Tim explained that the requests for money made in his voice were the work of artificial intelligence, which is evidently getting smarter and smarter.

How can you protect yourself?

So far, society largely does not perceive voice deepfakes as a real cyberthreat. Cases of malicious use are still very rare, so protection technologies have been slow to emerge.

For now, the best way to protect yourself is to listen carefully to what a caller says on the phone. If the recording is of poor quality, contains background noise, or the voice sounds robotic, that alone is reason to distrust the information you hear.

Another good way to test whether the caller is human is to ask unexpected personal questions. If the caller turns out to be a voice model, a question about, say, their favorite color will leave it without an answer, since that is not something a scam script anticipates. Even if the attacker types a reply and plays it back at that point, the delay in the response will make it clear that you are being deceived.

Another safe option is to install a reliable, comprehensive security solution. While such solutions cannot detect deepfake voices with 100% accuracy, they can help users avoid suspicious websites, fraudulent payments, and malware downloads by protecting the browser and scanning every file on the computer.
