Google’s Real Secret Hate Speech Police

(this article was originally published via TrigTent)

BAILEY T. STEEN | SATURDAY, FEBRUARY 3, 2018


Google, the search engine monopoly always watching over us, now has more than 100 government agencies and private-sector organisations assisting in the regulation of content posted on its site YouTube. Intended to take down extremist material, such as jihadist propaganda and child-exploitation content of the #ElsaGate variety, the Trusted Flaggers program, started in 2012, is now handing political capital to crusaders against the perpetually ill-defined “hate speech.”

The Daily Caller, a staunchly right-wing publication, reports that numerous confidentiality agreements were signed, prohibiting Google, YouTube’s parent company, from disclosing any dealings between these 100-plus agencies and organisations to the public, according to a Google representative who spoke with the outlet this Thursday.

Organisations that have gone public with their involvement in both problematic-content policing and counter-extremist monitoring include the Anti-Defamation League and a European anti-intolerance movement known only as No Hate Speech.

The BBC, in August of last year, noted that these Trusted Flaggers aren’t just company employees, but also unspecified law enforcement agencies and child protection charities. These organisations, however, some of them unnamed, have not been public about their connection to and influence on the site, swearing Google to secrecy by ink, pen and mutually agreed confidentiality.

YouTube’s “Trusted Flaggers” program isn’t the only measure to regulate internet content from extremism to mere words. Our mid-December report on TrigTent detailed how Susan Wojcicki, the chief executive of YouTube, wanted to hire well over 10,000 workers to process the removal of loosely defined extremist content that “endangers children.”

YouTube public policy director Juniper Downs told a Senate committee on Wednesday that 50 of the 113 Trusted Flaggers program members joined in 2017, during the Adpocalypse period on the site when YouTube’s advertisers placed pressure on the company to police content or risk significant funds and campaigns being withdrawn from their platform.

Downs’ account to the Senate committee — describing how Trusted Flaggers are equipped with digital tools allowing them to mass flag content for review by YouTube personnel — sounds awfully similar to a rejected YouTube policy (and now a dead meme) known as “YouTube Heroes”.

The same principle of mass flagging applied, sending content for review, only those tools would have been handed to everyone on the monopolised site, whatever their motives. After immediate criticism, the program was never put in place.

Critics argued from a “who will watch the watchmen” point of view, questioning the mob mentality that could devolve from a community-based system of majority censorship. The Trusted Flaggers, by contrast, form a new private aristocracy, where a select few are the final voice on what is considered oh-so-problematic. One of the figures caught up in this, laid out in The Daily Caller article, was University of Toronto professor Jordan B. Peterson.

A fierce critic of political correctness and seemingly authoritarian law, who rose to prominence through his opposition to the potential consequences of the hate speech provisions in Canada’s Bill C-16, the clinical psychologist ended up having one of his own videos blocked across 28 countries in early January.

The video — a segment from the H3H3 podcast in which Peterson, as a guest, outright rejected white supremacy — was falsely flagged by a legal entity for the ill-defined “hate speech”, with the explanation sent to Peterson personally stating that he had incited “terrorism” through his own defence of his character.

“Here’s some more ‘explanation’ for the censorship,” he tweeted. “Incitement of hatred, terrorist recruitment, incitement of violence, celebration of terrorism. Even to fall briefly and erroneously into such a category is a chilling event….”

When the removal was questioned by Ethan Klein, host of that same H3H3 podcast, the company replied by tweet as though the block were a mistake, not the formal legal complaint it had described to Peterson personally.

When Peterson asked for more clarification, the company did not respond, leaving us in the dark about what will be considered the removable content of the week: jihadi recruitment or a refutation of extremist world views? Arguments for white ethno-states or counter-arguments for a liberal society?

Just searching for keywords, the typical way YouTube’s admin algorithms operate, is not a tenable solution. Neither is giving the reins to an illiberal aristocracy with political goals of its own.

Both of these result in outright removal or placement in “restricted” mode, which essentially filters your content out of the reach of children and users who are not signed into an account (the vast majority of users), leading to eventual demonetisation, which cuts your financial viability on the platform.

In her testimony before the Senate committee, Downs addressed concerns about combating “offensive” or “inflammatory” content that falls very far from the tree of violent extremism, sitting more on the side of disagreeable political incorrectness and vulgar humour.

“Some borderline videos, such as those containing inflammatory religious or supremacist content without a direct call to violence or a primary purpose of inciting hatred, may not cross these lines for removal. But we understand that these videos may be offensive to many and have developed a new treatment for them,” she said.

“Identified borderline content will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes. Initial uses have been positive and have shown a substantial reduction in watch time of those videos.”

This leaves the likes of Peterson, without a platform to address smears, in an isolated zone policed by either poor artificial intelligence or the oh-so-poor proletariat souls who watch over us.


Thanks for reading!

Bailey T. Steen is a journalist, editor, artist and film critic based in Victoria, Australia, but is also Putin’s Puppet on occasion.

Articles published on Trigtent | Janks Reviews | Medium | Steemit

Updates and contact: @atheist_cvnt on Twitter | Instagram | Gab.Ai

Business or personal contact: bsteen85@gmail.com | Comment below

Cheers, darlings!! x
