
The Bullet: Deepfakes — We May Never Need to Perform Again

Written by Degen Hill · 3 min read

If we’re viewing content that both is and isn’t real, does that change how we view or value it?

Last week, I listened to a podcast featuring comedian Neal Brennan. He was talking about his latest standup special “Blocks” and mentioned that in it, he had done something he never thought he would do — he Deepfaked a small part of it. (Podcast discussion on Deepfakes starts at 1:38:00).

Brennan explained that during the taping of his special for Netflix, he had forgotten to say a line. Feeling that it was necessary to include it, he added that he wanted viewers to see him say the line rather than doing a voice-over while the camera panned to the crowd. With that in mind, there was only one solution — Deepfake himself onstage saying the line he had forgotten. In fact, Brennan admits that throughout the special, there are three Deepfakes of himself on stage.

I watched the special before listening to the podcast and had no idea that, at multiple points, I wasn’t actually looking at the “real” Neal Brennan. For those unaware, Deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content. You’ve likely seen someone else’s face plastered onto a character from a movie saying something funny. And for the most part, that’s what Deepfakes have been used for — memes that make us laugh.

But as Deepfake technology advances (and gets cheaper), what’s to stop the creators of TV shows, movies, standup specials, pornography, or any other visual content from ever needing to act or perform again? Theoretically, a script could be written, stand-ins could do the acting or performing, and the creators’ or actors’ faces could be Deepfaked on without us ever realizing they weren’t actually there.

My first thought was that Deepfakes are a more advanced form of Automated Dialogue Replacement (ADR), also referred to as “looping.” In film, ADR is the process of re-recording audio in a quieter, more controlled setting. We’ve all seen it before — the over-the-shoulder shot or wide shot with dubbed-in audio that we can tell wasn’t captured during the live recording. A Deepfake is that, plus the visual element — generating the actor’s face saying the line.

At this point, it would be an interesting (and costly) experiment to make a fully Deepfaked film, with viewers going into it fully aware the technology was being used. We would watch it for the novelty. But in the near future, perhaps Brad Pitt isn’t available for a movie, but the studio wants him, so he’s paid for his likeness and every scene with him is Deepfaked. Does knowing that change how we view the performance? Is it any different from knowing a background was composited on a green screen?

Deepfakes are already here, and they’ll inevitably continue to be used in more mainstream content. From traditional entertainment to YouTube videos, Deepfake technology has evolved to be increasingly convincing and prevalent in the content we consume. Should we view the technology as a non-authentic disruption to our media, or simply the next evolution of content?

All opinions expressed in this piece are the writer’s own and do not represent the views of KrASIA. Questions, concerns, or fun facts can be sent to [email protected]
