AI-powered deepfake technology has advanced markedly in recent years. It has been used successfully to simulate real-world likenesses in audio and video clips, and now it has made its way to the cinema.
The release of Top Gun: Maverick in late May 2022 saw Val Kilmer reprise his famous role as Adm. Tom “Iceman” Kazansky. The problem? Kilmer can no longer speak clearly, having lost his voice in 2015 following a battle with throat cancer. Yet he appears, and speaks, in the action movie as if it never happened.
This was made possible by technology similar to that used in deepfake schemes. Sonantic, a UK-based tech startup, harnessed artificial intelligence (AI) to recreate an authentic-sounding Kilmer voice. Working from old recordings of his voice and footage from his previous movies, the company created a realistic image and voice of an aging Iceman.
The rise of deepfakes
Interestingly, it was Top Gun star Tom Cruise who inadvertently brought deepfake discussions to prominence over the past year. A TikTok account (@deeptomcruise) depicting a Tom Cruise deepfake went viral in early 2021, despite the account’s name and bio clearly indicating it is not Cruise himself.
The technology is so convincing that 61 percent of users are unable to tell the fake version from the real actor.
Read more on TechRepublic: Deepfakes: Microsoft and others in big tech are working to bring authenticity to videos, photos
How deepfakes work
Deepfake examples like these are built on a generative adversarial network (GAN): two AI neural networks that compete against each other, iteration after iteration, as the voice, image, or video is created. One network, the generator, attempts to counterfeit an original photo, audio recording, or video. The other, the discriminator, is tasked with spotting the result as fraudulent.
It might take millions of iterations to produce a version that is indistinguishable from the real thing. Modern processors and high-performance computing platforms can take care of all that work in a short time frame.
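The adversarial loop described above can be illustrated with a toy one-dimensional GAN in plain Python. Everything here is a made-up minimal sketch, not production deepfake code: a two-parameter "generator" learns to counterfeit samples from a hypothetical target distribution (a Gaussian with mean 3.0), while a logistic "discriminator" tries to tell real samples from fakes. The distribution, learning rates, and iteration count are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    u = max(-30.0, min(30.0, u))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data the counterfeiter tries to imitate: samples from N(3.0, 0.5).
def real_sample():
    return random.gauss(3.0, 0.5)

# Generator: turns noise z ~ N(0, 1) into a fake sample x = t0 + t1 * z.
t0, t1 = 0.0, 1.0
# Discriminator: logistic classifier D(x) = sigmoid(w * x + b),
# its estimated probability that x is a real sample.
w, b = 1.0, 0.0

LR_D, LR_G, BATCH = 0.1, 0.02, 64
for step in range(4000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    xr = [real_sample() for _ in range(BATCH)]
    xf = [t0 + t1 * random.gauss(0.0, 1.0) for _ in range(BATCH)]
    grad_w = (sum((1 - sigmoid(w * x + b)) * x for x in xr)
              - sum(sigmoid(w * x + b) * x for x in xf)) / BATCH
    grad_b = (sum(1 - sigmoid(w * x + b) for x in xr)
              - sum(sigmoid(w * x + b) for x in xf)) / BATCH
    w += LR_D * grad_w
    b += LR_D * grad_b

    # Generator update: nudge t0, t1 so the discriminator mistakes fakes for real.
    for _ in range(BATCH):
        z = random.gauss(0.0, 1.0)
        df = sigmoid(w * (t0 + t1 * z) + b)
        t0 += LR_G * (1 - df) * w / BATCH
        t1 += LR_G * (1 - df) * w * z / BATCH

fake_mean = sum(t0 + t1 * random.gauss(0.0, 1.0) for _ in range(5000)) / 5000
print(f"mean of generated fakes: {fake_mean:.2f} (real mean is 3.0)")
```

After a few thousand adversarial rounds, the generator's output drifts toward the real data it is counterfeiting. Production deepfake systems apply the same generator-versus-discriminator contest to high-dimensional audio and video rather than single numbers, which is why they need the processing power described above.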
Read more on Developer.com: Introduction to Deepfake
The dark side of deepfakes
The problem with deepfake technology is that it won’t only be used for frivolous TikTok posts or movie cameos by aging or infirm stars—it is destined to become a valuable addition to the cybercrime arsenal. As such, deepfakes will be used to execute sophisticated social engineering attacks and penetrate security defenses.
“The use of AI to generate deepfakes is causing concern because the results are increasingly realistic, rapidly created, and cheaply made with freely available software and the ability to rent processing power through cloud computing,” said Kelley M. Sayler, an analyst in advanced technology and global security at the Congressional Research Service. “Even unskilled operators could download the requisite software tools and, using publicly available data, create increasingly convincing counterfeit content.”
Far from being mere speculation or scaremongering, there are already documented cases of bank heists and fraud in the UK and the Middle East in which criminals used cloned-voice technology to steal tens of millions of dollars. All it took was a convincing deepfake voice recording of a trusted client or associate to dupe finance personnel into transferring massive sums.
Related: Moving Beyond Cybersecurity to Cyber Resilience
Deepfake cybersecurity implications
As deepfake technology becomes more advanced and accessible, expect security awareness training to incorporate deepfake guidance soon. Above all, employees should understand the importance of vetting video, image, audio, and news content for authenticity.
“Just like any other form of social engineering, deepfakes can be used to make you believe something that isn’t real because it seems to come from a credible source,” said Hank Schless, senior manager of security solutions at cybersecurity vendor Lookout. “Social engineering is constantly evolving as the ways people interact with each other change. Nowadays, with most information consumption coming through video, it makes sense that deepfakes are being used more broadly.”
Read next: 5 Best Practices to Prevent Cyberattacks