A viral deepfake video featuring Florida Gov. Ron DeSantis announcing his withdrawal from the 2024 presidential race has raised questions and concerns about the spread of AI-generated misinformation.
The user who initially shared the video on X, formerly known as Twitter, clarified that it was a fake creation. However, it didn’t take long for other accounts to get hold of the video, presenting it as legitimate “breaking news.”
“These videos are downright dangerous,” said cybersecurity expert Alan Crowetz. “They’re being used, in some ways as jokes, but have potentially devastating consequences.”
Deepfakes are artificial images or videos generated using machine-learning algorithms. They can convincingly mimic the appearance and voice of real people, such as DeSantis.
“After last week’s events, including my poor performance at the debate as well as President Trump rejoining X, I’ve realized I need to drop out of this race immediately,” DeSantis appeared to be saying in the fabricated video.
Although the video might seem convincing on the surface, there are some tell-tale signs that it is a deepfake. For example, you might have noticed DeSantis’ lack of breathing sounds, blurry teeth, and audio-video sync issues. However, the concern is that not everyone can easily spot these red flags, especially when they’re not actively looking for them.
“Something like this can have an impact on the entire presidential election and may even sway a close vote enough to go a different direction,” Crowetz said, “so there’s a lot of incentive.”
Crowetz emphasized the importance of cybersecurity education in an era when misinformation can spread rapidly online. He also believes people must maintain a healthy skepticism toward all forms of online content.
“When in doubt, don’t trust it. Check it out, other sources. Don’t believe everything you see, look at where it came from. I can’t emphasize it enough. Don’t be overconfident that you can’t be tricked,” Crowetz said.
Deepfake technology is advancing rapidly, but tools and policies for flagging fake content are emerging. For example, Google recently introduced a policy requiring all election advertisers to clearly disclose when their ads contain AI-generated content. It is set to take effect in mid-November, just in time for caucus and primary season.
“Pay attention to news stories about artificial intelligence because the one thing the bad guys are really good with is using technology and every means possible for the most clever way of getting people,” Crowetz said. “They like to take advantage of things that are time-sensitive. The presidential elections are a big thing. They’re going to take advantage of that. Tax time, they’re going to take advantage of that.”
For additional cybersecurity guidance from Crowetz and his team at InfoStream, visit: infostream.cc/cybersecurity-for-businesses.