
What Happened to the Deepfake Election?

by Pete Pachal on October 17, 2024

Deepfakes are easy. We all know this. For years, and well before the current generative AI boom, journalists have been reporting how advancing technology has simplified the creation of fake videos that mimic real people. That Obama video from 2018 was probably the biggest event to put the issue on the radar.

In the past year or so, creating deepfakes has gone from easy to, you know, really easy. Before, you needed at least a little bit of technical and editing know-how. Now, your own personal deepfake is just a few minutes away with tools like ElevenLabs, Hedra, or HeyGen.

Given that it's an election year, everyone is super nervous. CNN's Jake Tapper was the latest journalist to highlight the issue, going so far as to show a deepfake of himself (made by comedian Danny Polischuk) on live TV. His report goes on to cite how political campaigns have been using deepfakes and, more broadly, AI-generated footage in various ads.

[Image: Deepfake election concept]

Reports like Tapper's are common, and they often carry a certain helplessness: after pointing out how easy it is to use AI to create fake footage and deepfakes, they imply that, with deepfakes in the system, you can no longer know what to trust. There's an implicit call for better standards, if not outright regulation, in the hope of preventing deepfakes from poisoning our information ecosystem any more than they already have. But regardless, don't believe everything you see or hear.

Of course, that last sentence is always good advice. And many, if not most, people would support a framework that lets consumers of information better understand when footage or photos of a real person have been AI-generated or altered.

Still, although deepfakes are more common than ever, incidents of people actually being fooled by them are rare. Studies show the broader social impact isn't as corrosive as many assume.

In other words, it seems the information ecosystem is slowly building up resistance to deepfakes. Consider:

  • Standards are emerging: Many AI image generators embed metadata that identifies a file as AI-created. Through standards like the C2PA, that information can be conveyed easily to the user. And major distribution platforms like YouTube require creators to label deepfake content or risk having the content flagged, or even being expelled from the platform.

  • The metadata factor: Technical know-how isn't required to make deepfakes, but it is required to strip out the metadata. Certainly, determined people will always find a way, but creating a convincing and "clean" deepfake isn't as easy as prompting DALL-E. (More on this in Adobe's news below.)

  • Everyone's deepfake radar is at maximum: Thanks in part to folks like Tapper raising awareness, our collective skepticism about what we're looking at is, for better or worse, higher than ever. Journalists are also now often trained to spot AI imagery (at ONA, I attended a great session from the folks at Tegna), so it's harder than ever for a deepfake to slip into mainstream coverage.

  • Debunking is swift: If a deepfake begins making the rounds, it's almost always flagged and debunked quickly, often within minutes, enabling distribution platforms to label, downrank, or delete it.

  • Is it satire? Many of the political deepfakes circulating have relied on a satire defense, which may make them protected under the First Amendment. While anyone who has seen SNL will appreciate good satire, the realism of deepfakes makes the issue slightly less straightforward. But if your deepfake at any point declares allegiance to Hydra, it's hard to see it really fooling anyone.
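The metadata point above can be made concrete with a rough sketch. This is not a real C2PA parser; the dict below stands in for provenance metadata already parsed from an image file, and the field values follow the IPTC Digital Source Type vocabulary that such provenance manifests can carry:

```python
# Illustrative sketch: how a platform might flag AI-generated media
# based on declared provenance metadata. IPTC Digital Source Type
# terms are real; the parsing step is assumed to have happened already.

AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia",               # fully AI-generated
    "compositeWithTrainedAlgorithmicMedia",  # partially AI-generated
}

def looks_ai_generated(metadata: dict) -> bool:
    """Return True if the provenance metadata declares AI involvement."""
    source = metadata.get("digitalSourceType", "")
    # Values are often full vocabulary URIs; compare the last path segment.
    return source.split("/")[-1] in AI_SOURCE_TYPES

# A compliant generator embeds something like this:
sample = {
    "digitalSourceType":
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
}
print(looks_ai_generated(sample))  # → True

# Stripping the metadata defeats the check entirely, which is exactly
# why a convincing and "clean" deepfake takes extra effort.
print(looks_ai_generated({}))      # → False
```

The limitation the sketch makes obvious is the one the article notes: labels like these only help when the metadata survives, which is why the C2PA pairs them with cryptographic signing to make silent tampering detectable.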

To be clear, I'm talking specifically here about deepfakes circulated on the internet with the intent to deceive: to create "fake news." Situations like the infamous incident in which a company executive was duped into transferring $25 million to criminals during a deepfake video call, along with nonconsensual deepfake pornography, are serious problems. They're related, but they're also somewhat different animals.


For now, the deepfake election has turned out to be less of an epic than we might have feared when those Biden robocalls were going around. And there's a reason every deepfake report (including Tapper's) cites those calls: there hasn't been a similar widely reported incident since, as Axios recently observed.

I'm not sure if that's progress, but at least we don't have regress. The deepfake problem is real, but our collective skepticism seems to be inoculating us against it becoming a fatal disease.

[Image credit: Midjourney]

Pete Pachal has been covering technology for more than two decades and has been following the field of artificial intelligence since before Gmail was trying to complete your sentences. Pete was Chief of Staff for Content at CoinDesk where he led the publication’s AI Committee and wrote the company’s guidelines for the use of generative AI. He’s also held senior editorial positions at Red Ventures, Mashable, and NBCUniversal. His work has appeared in Fast Company, Forbes, TIME, and more. Explore Pete's AI training courses for PR & Marketing professionals on Mediacopilot.ai



© Techlicious LLC.