Can Google’s Anti-Deepfake Trials Prevent The New Disinformation Age?

As researchers begin trials on countering Deepfake technology — the AI software that manipulates footage into the likeness of others — experts are debating whether the fight is already a lost cause. The fate of the information age now rests in the hands of companies like Google, the big tech industry's monopolistic serial abuser, as it conducts its own experiments.

A recent report from The New York Times revealed that Google's top scientists hired over a dozen actors to simply sit at tables, walk down hallways and cross streets while talking into a video camera. Using artificial intelligence and photo manipulation, the company then seamlessly swapped the actors' faces with those of hundreds of male and female subjects, producing altered videos meant to train systems to better detect signs of future Deepfake manipulations. According to sources for the Times, the company's current fear is that this technology could be used to sway the 2020 presidential election and other democratic races in the future.

There is a valid concern about having institutions like Google — under no obligation to play the online fact-checker — use their vast power to order, construct and dismantle their own “perfect” Deepfakes. As AI tools become streamlined paths towards digital forgery, from innocent face-swap video apps like Zao to the more pervasive fake porn industry of Mr. DeepFakes, there's a difference of scale between individuals spreading false smut and our highest governments and corporations doing the same. “And you can already see a material effect that Deepfakes have had,” argues Nick Dufour, a Google engineer overseeing the company's research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”

While such Y2K rhetoric may seem hyperbolic, this kind of misinformation economy has already produced real-world examples. In August, the creators of the website NotJordanPeterson.com built a high-functioning text-to-speech model simulating the voice of Dr. Jordan Peterson, one of the most controversial academics and political pundits. Anyone on the internet could generate a simple audio clip of the man reading hilarious feminist literature, vulgar rap lyrics, communist revolutionary talking points, or nonsensical meta-analysis on “the art of sucking dick”, showcasing the potential for dangerously accurate content made from nothing.

“Something very strange and disturbing happened to me this week,” Peterson wrote on his website at the time, suggesting a copyright threat was in order against the site’s owners. “If it was just relevant to me, it wouldn’t be that important (except perhaps to me), and I wouldn’t be writing this column about it. But it’s something that is likely more important and more ominous than we can even imagine. Wake up. The sanctity of your voice, and your image, is at serious risk. It’s hard to imagine a more serious challenge to the sense of shared, reliable reality that keeps us linked together in relative peace. The Deep Fake artists need to be stopped, using whatever legal means are necessary, as soon as possible.”

In a larger political context, such tools can be just as effective for selling the public on doctored propaganda as for creative comedy. Peterson, the Intellectual Dark Web's so-called “free speech warrior”, is at least unique in acknowledging that a free misinformation economy can make the public slaves to their own falsehoods (even if that admission is hypocritical to his own silly principles). For a journalist, audio and video evidence is a valuable resource in verifying someone's record. If simple Deepfake software can generate a random person's statement on the fly, refuting debated accusations about misogyny, censorship or their stated beliefs in a contextual vacuum, those running for political office have a vested interest in manufacturing truth, especially divisive candidates like Bernie Sanders, Elizabeth Warren or President Donald Trump.

The Times report cites several other cases where “video evidence” coming out of countries from proto-fascist Brazil and Gabon to communist China is simply unverifiable by current media standards. “The Gabonese president, for example, was out of the country for medical care and his government released a so-called proof-of-life video”, the report states. “Opponents claimed it had been faked.” Experts call that confusion “the liar's dividend”, and it is already challenging our assumptions about what is real and what is not. It's not unreasonable to see how the United States and its largest corporations could fight their battles through manufactured truth, dragging the common problem of “he said, she said” into the audio-visual realm. For experts in the field, it seems Pandora's box has already been opened.

According to a statement from Google, “any academic or corporate researcher could [already] download its collection of synthetic videos and use them to build tools for identifying Deepfakes,” giving the online world “essentially a syllabus of digital trickery” to either further or deconstruct the tech's viability. Other researchers, such as the engineers at Canada's Dessa, claim their own Deepfake detector could identify Google's faked videos with “almost perfect accuracy”, while noting that the same detector, tested on Deepfake videos found across the internet, “failed more than 40 percent of the time”. If a monopoly force cannot keep pace with independent forgers, we are still far from truly reliable Deepfake debunking.
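The report doesn't detail how any of these systems work internally, but the basic recipe researchers describe is simple enough to sketch. Below is a minimal, hedged illustration of fine-tuning an off-the-shelf image classifier on a folder of labeled real and fake video frames; the folder layout, hyperparameters and model choice are my own assumptions for illustration, not Google's or Dessa's actual pipeline.

```python
# A minimal sketch of frame-level Deepfake detection, in the spirit of the
# tools described above. Paths and hyperparameters are hypothetical; real
# detectors are far more elaborate.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Frames are assumed to be pre-extracted into real/ and fake/ subfolders:
# frames/train/real/*.jpg and frames/train/fake/*.jpg (hypothetical layout).
DATA_DIR = "frames/train"

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Reuse a pretrained backbone and swap its head for a real-vs-fake output.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a token number of passes, purely for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Dessa's numbers above hint at the catch with this approach: a classifier like this learns the quirks of the specific forgeries it was trained on, which may explain why a detector can ace Google's own dataset and still fail on forgeries pulled from the wider internet.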

There's a reason why politicians like Sen. Marco Rubio (R-FL) have taken to calling Deepfakes the “modern equivalent of nuclear weapons”. Our most trusted forms of evidence are being called into question at a rate beyond our understanding, let alone our regulatory ability. VICE News reported that Deepfake critics like Peterson and Rubio have expressed sympathy for proposals like the DEEPFAKES Accountability Act. The legislation, however, isn't clear on how its solutions would avoid harming the First Amendment and Big Tech's Section 230 protections.

In a report on the bill, the Electronic Frontier Foundation notes “there is an exception for parodies, satires, and entertainment — so long as a reasonable person would not mistake the ‘falsified material activity’ as authentic”, yet the bill “doesn’t specify who has the burden of proof” if such a case were taken to court, “which could lead to a chilling effect for creators.” As I wrote at the time, this could turn those making the wrong joke or pushing ill-considered satire into criminals for “falsifying the public record”, regardless of whether the video is AI-generated or edited CGI. The lack of clear definitions makes censorship and deception all the more predatory.

Reclaim The Net expanded on this, noting how a famous video showing Rep. Nancy Pelosi slurring her words (a crude slowdown, not AI at all) was falsely condemned as a Deepfake. Is mocking a politician a crime if it's done by AI rather than editing? Is it worth the proposed fines upwards of $150,000 plus potential criminal penalties? Why do these laws provide two-tier justice exemptions for officers and employees of the US government? And how can we expect to track down these tricksters without a huge expansion of the surveillance state? Until these questions are answered, perhaps swallowing the black pill is the only available course of action. This is not just the insight of a humble newsman seeing a new information cold war, but of the very people who created this problem in the first place.

“I realized you could now basically create anything, even things that don’t even exist,” argued Hao Li, one of the world’s most prolific Deepfake artists, in a Technology Review interview. “Even I can’t tell which ones are fake. We’re sitting in front of a problem [since] videos are just pixels with a certain color value. We are witnessing an arms race between digital manipulations and the ability to detect those, with advancements of AI-based algorithms catalyzing both sides. When that point comes, we need to be aware that not every video we see is true… Soon, it’s going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions.” Now it’s just a matter of whether those solutions can keep up before the weapons are ready to fire.
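Li's “just pixels” remark is worth unpacking, because it's the crux of why detection is so hard: a video frame carries no built-in record of authenticity. Here's a toy sketch of that idea (the frames are synthetic stand-ins I've invented, not real video data):

```python
# Li's point in miniature: a video frame is nothing but a grid of color
# values, with no intrinsic marker of authenticity. A "real" frame and a
# doctored copy are just two numeric arrays.
import numpy as np

height, width = 4, 4
real_frame = np.random.randint(0, 256, size=(height, width, 3), dtype=np.uint8)

# "Manipulating" the frame is just overwriting numbers -- here, painting
# a 2x2 patch pure red.
fake_frame = real_frame.copy()
fake_frame[1:3, 1:3] = [255, 0, 0]

# Nothing in the data structure distinguishes the two; a detector has to
# infer tampering from statistical patterns in the values themselves.
print(real_frame.dtype, real_frame.shape)      # uint8 (4, 4, 3)
print(np.array_equal(real_frame, fake_frame))  # False -- only numbers changed
```

As generators get better at making those statistical patterns indistinguishable from real footage, the arms race Li describes tilts toward the forgers, which is why he argues detection alone won't be enough.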

Thank you for reading. This article was published for TrigTent, a bipartisan media platform for political and social commentary. Bailey Steen is a journalist, editor, and designer from Australia. You can read their work on Medium and in previous publications such as Janks Reviews and Newslogue.

For updates, feel free to follow Bailey on Facebook, Twitter, Instagram, YouTube, and other social media sites. You can also get in touch through bsteen85@gmail.com for personal or business matters. Stay honest and radical. Cheers, darlings. 💋

