Deepfakes: How Digital Manipulation Destroys The Online Information Age

When it comes to greater transparency, the internet both giveth and taketh away. The rise of the DeepFake, a portmanteau of “deep learning” and “fake” describing software that manipulates content into the likeness of others, shows the information age faces a serious monster beyond the control of its own creators, leaving its objectified subjects trying to put the genie back in the bottle.

It was earlier this week that NotJordanPeterson.com, a new text-to-speech website simulating the voice of the controversial professor, was pressured into shutting down after shock, awe and suggestive legal threats from the real Dr. Peterson. Given the surreal accuracy of its results, where hilarious pantomimes of feminist literature, vulgar rap videos, communist revolutionary talking points, and nonsensical meta-analysis on “the art of sucking dick” were only a few clicks away, the doctor does raise legitimate criticisms of how DeepFakes could harm our “information ecosphere”.

“Something very strange and disturbing happened to me this week,” Peterson wrote on his website. “If it was just relevant to me, it wouldn’t be that important (except perhaps to me), and I wouldn’t be writing this column about it. But it’s something that is likely more important and more ominous than we can even imagine. Wake up. The sanctity of your voice, and your image, is at serious risk. It’s hard to imagine a more serious challenge to the sense of shared, reliable reality that keeps us linked together in relative peace. The Deep Fake artists need to be stopped, using whatever legal means are necessary, as soon as possible.”

It should be noted this message, while fundamentally necessary, happens to come from a known deception artist. Peterson, notorious for framing himself as the “intellectual dark web” hero of free speech, has cultivated his own hypocritical reputation: filing frivolous lawsuits against his own critics, repeatedly doxing students who took part in protests, organizing an elusive fellowship program where criticism is suppressed by his own social media moderators, and proposing the now-scrapped, McCarthy-esque hit list of colleges teaching alleged neo-Marxist content.

This bad history from a bad actor like Peterson makes him legitimately easy to dismiss… and makes potential illegitimate content against him all the more dangerous for a transparent discourse. While the 1,300-word blog post suggests more grift-based lawsuits are in order, condemning DeepFakes as a threat to both his personal reputation and his preferred socio-political gains against “social justice warriors” and “postmodern neo-Marxists”, it also raises necessary questions about how this technology harms individual privacy and bodily property rights, undermines the verifiability of online audio and video, and distorts due process in the court of public opinion.

Peterson is one of the few celebrity subjects speaking out about this invasive issue, even if it’s for his own self-interest rather than the greater good of digital transparency. Nevertheless, some credit is due despite the counter-concerns on free-expression grounds. Thus far, DeepFake technology has only made a name for itself as a niche in the meme, cinema, and pornography markets, but to deny its potential for political propaganda loses sight of how easily conspiracy, fraudulence, and gotcha culture can spread. Give bad actors the tools and defamation becomes a readily available product, just a few keystrokes away.

That’s not to say it’s an either/or case of malicious deception versus artistic liberty. There are programs like Faux Rogan, a new Joe Rogan audio-video DeepFake where people can vote on what’s real and what’s fake; channels like Ctrl Shift Face, the YouTube page seamlessly swapping subjects’ faces; and sites like Mr. DeepFakes, the pornographic database of fake celebrities projected onto real porn stars, while the entire brand of technology was made famous by a BuzzFeed video projecting a fake PSA from actor Jordan Peele onto former President Barack Obama. All of them have a case for free expression at the expense of the digital image of others.

While we’re free to judge the ethics however we see fit, it’s when dealing with a deceptive monster that doesn’t care about such ethos that the discussion requires serious consideration of definitions, principles, and political policy (in that order). According to VICE News, Peterson has expressed sympathy for new laws such as Rep. Yvette Clarke’s (D-NY) proposed DEEPFAKES Accountability Act, though such solutions raise alarms over how due process, the First Amendment and Big Tech’s Section 230 protections will adapt to these new technologies.

According to a report from the Electronic Frontier Foundation, the bill includes “an exception for parodies, satires, and entertainment — so long as a reasonable person would not mistake the ‘falsified material activity’ as authentic”, yet it “doesn’t specify who has the burden of proof” should such a case go to court, “which could lead to a chilling effect for creators.” Make the wrong joke, produce an ill-considered satire or change the course of politics by falsifying the public record, whether with genuine AI-generated fakes or hand-edited CGI videos, and the fear of unwanted censorship begins to fester while the value of deception becomes all the more pervasive.

We also have to ask whether these definitions are limited to genuine AI DeepFakes. As noted by Reclaim The Net, a famous video that appeared to show Rep. Nancy Pelosi slurring her words was wrongly condemned as a DeepFake, with people urging its removal across social media, despite it being simple user-generated fakery. Does requiring “mandatory labeling, watermarking, or audio disclosures” extend to all mediums? Are simple offenses worth fines upwards of $150,000 plus potential criminal penalties? What constitutes the difference in the eyes of the law? Why are there further exemptions for officers and employees of the US government? How do they determine a video has actually been DeepFaked? Were these questions simply overlooked by Jordan “whatever legal means are necessary” Peterson? Or has he stepped outside his bounds on an issue bigger than himself?

These criticisms echo those of Hao Li, one of the world’s most prolific DeepFake artists, who now questions the same “off-the-shelf deception” he helped create. “I realized you could now basically create anything, even things that don’t even exist,” Li said in a Technology Review interview. “Even I can’t tell which ones are fake. We’re sitting in front of a problem [since] videos are just pixels with a certain color value. We are witnessing an arms race between digital manipulations and the ability to detect those, with advancements of AI-based algorithms catalyzing both sides. When that point comes, we need to be aware that not every video we see is true.”

To put it simply, even the creators of such face-swap technology are losing their grip on their own creations. How can we expect average people to keep a cool head in this marketplace of intentional misinformation? By blurring the line between fantasy and reality, applying generative adversarial networks (GANs) and facial-recognition tracking to age-old photography, DeepFake software can freely harvest, “learn” from and recompose data so convincingly that it amounts to automated bodily forgery, whether it’s just for memes or a means to a socio-political end.
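For readers curious what that adversarial “learning” actually looks like, below is a minimal, hypothetical sketch of a GAN training loop in PyTorch. It is not any production DeepFake tool: the tiny networks, the two-dimensional toy data standing in for aligned face crops, and every name and hyperparameter in it are illustrative assumptions. What it does show is the structure Li describes, a generator trying to fool a discriminator that is simultaneously learning to catch it.

```python
# Minimal sketch of the generator-vs-discriminator "arms race" behind
# deepfake synthesis. Real pipelines train on face crops extracted with a
# facial landmark tracker; here a 2-D Gaussian stands in for that data so
# the script runs anywhere. All names and sizes are illustrative.
import torch
import torch.nn as nn

LATENT = 8  # size of the random "noise" vector the generator starts from

generator = nn.Sequential(
    nn.Linear(LATENT, 32), nn.ReLU(),
    nn.Linear(32, 2),            # emits a fake "sample" (stand-in for an image)
)
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1),            # emits a real-vs-fake logit
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for genuine data (e.g. face crops of the target person).
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), LATENT))

    # 1) The discriminator learns to tell real samples from fakes...
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) ...while the generator learns to fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

Scale those two networks up and feed them real face crops instead of toy points, and the same loop is what lets the software compose someone’s likeness convincingly enough to pass for the real thing, which is also why detection tools trained the same way keep falling behind.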

Thanks for reading! This article was originally published for TrigTent.com, a bipartisan media platform for political and social commentary, truly diverse viewpoints and facts that don’t kowtow to political correctness.

Bailey Steen is a journalist, graphic designer and film critic residing in the heart of Australia. You can also find his work here on Medium and in publications such as Janks Reviews.

For updates, feel free to follow @atheist_cvnt on Facebook, Twitter, Instagram or Gab. You can also contact him at bsteen85@gmail.com for personal or business inquiries.

Stay honest and radical. Cheers, darlings. 💋

