Google Blames YouTube’s Adpocalypse Fallout for Recent $70 Billion Loss

Following countless scandals surrounding data privacy and radical content, YouTube is now taking the blame from executives who believe the platform went woke and went broke. On Tuesday, Google’s parent company Alphabet published its first-quarter earnings, which revealed ad revenue failed to meet yearly expectations, wiping an incredible $70 billion from the big tech monopoly’s stock value, a loss being pinned on YouTube’s failed administration.

According to recent comments from Ruth Porat, the CFO of Alphabet and Google, this year’s poor ad revenue growth stems from failed management on the part of YouTube following Adpocalypse. “While YouTube clicks continue to grow at a substantial pace in the first quarter, the rate of YouTube click growth decelerated versus a strong Q1 last year, reflecting changes that we made in early 2018, which we believe are overall additive to the user and advertiser experience,” Porat said in the company’s earnings call, according to CNBC. In non-bureaucratic English: growth in ad revenue and user engagement has slowed drastically since the previous year.

Although policy specifics for why this happened are being held close to the chest, a quick sift through YouTube’s record shows it’s not a giant mystery. In the wake of Adpocalypse, when sponsors began stripping ads from the platform after finding them running on some incredibly small far-right channels, the company pledged to purge the platform of harmful conduct surrounding conspiracy theories, “fake news” and hate speech. This drove a divisive wedge through the general public over whether YouTube was enabling radicalisation for the sake of easy revenue or acting as a censorious tyrant gatekeeping the people’s online space, a perceptual lose-lose whichever way YouTube cut it.

These harmful perceptions led to a select handful of reforms across algorithmic, administrative and monetisation practice, which in the interest of courting everybody seemingly pleased nobody. “There’s a misconception that YouTube makes money off of recommending ‘radical’ content,” a YouTube spokesperson told CNBC, “but the truth is that very little of this content makes any kind of meaningful money. In fact, when we cleaned up our partner program to remove bad actors last year, we made it clear that 99% of those impacted creators were making less than $100 a year.”

While this narrative could be true, it neglects the market power behind that 1% impact. The marketplace of hate could never grant fortunes to all of its grifters, but substantial outlets such as InfoWars were doing the industry’s heavy lifting, growing upwards of 1.3 million subscribers and sustaining thousands of monthly active users. There are damn good reasons for policy enforcement against such outlets, from host Alex Jones slandering and inciting violence against political actors to the harassment pushed against the victims of the Sandy Hook massacre. Yet the platform has remained unaccountable as to how it chooses its impacted targets and why, which no doubt sows concern among a public wishing for change, even when justice can be served in an unjust manner.

The same unaccountability applies to YouTube’s admitted slant towards “authoritative news sources” and streamlined fact-checking. These include mainstream outlets such as MSNBC and CNN, which are subsidised by YouTube to the tune of millions and arbitrarily granted a higher status via promotion across users’ homepages and search results, despite lower user engagement than independent content creators in the same fields. This is detailed in a stunning report from Bloomberg showcasing how employees watched conspiracy grifters rise, abusing content policies under YouTube’s implied consent, before the company nosedived into policy punishments after the Adpocalypse backlash.

By contrast, YouTube also started curbing engagement from its own creators, evidenced by January 2018’s monetisation changes dictating that the YouTube Partner Program (YPP) requires users to have logged over 4,000 hours of audience viewership and over 1,000 subscribers. This disincentivised smaller channels from using YouTube as a core revenue source until they somehow gained popularity, a benefit only to larger corporate ventures that already have the money and power, leaving the impression YouTube doesn’t really stick by its pro-little-guy namesake.

Since these reforms, YouTube has earned its own reputation for censoring millions of videos on varying grounds while providing users with no independent, transparent appeals process to present counterpoints and evidence before an ethical judge, even as it enables pedophile communities to continue to fester across the platform. Selective enforcement, whether intentional or not, remains a main point of contention among users dissatisfied with the platform.

“Once you are in this [comment section] loophole, there is nothing but more videos of little girls,” explained Matt Watson, a YouTuber who exposed a pedophile culture enabled by the algorithm. “How has YouTube not seen this? How is it that there are people who are genuinely good, where every algorithm under the sun detects when they swear more than two times or make videos about panic attacks and depression, yet this shit is going on? It’s been in the public consciousness for over two years and nothing is being done. It’s disgusting.”

This leaves YouTube in a difficult spot. It’s profitable to stick to the natural state of the platform, where creators have unlimited creative freedom, yet that enables the worst of mankind to abuse the system if left unchecked. There’s a moral argument to curb these extremist excesses by placing safeguards on the marketplace, yet such measures continue to hit unintended targets when placed in the firing line of unaccountable enforcement. While there’s truth behind the idea that YouTube went woke and went broke through failed policies bending to the unpopular mainstream, the justifications can’t be easily dismissed. It’s just that the ethics of modern video sharing require more than simple admins, algorithms and a focus on PR; they require wrestling with the values of free speech versus a secure society.

“When you’re dealing with a platform that generates 300-plus hours of video per minute, the realities of being able to check and verify the content become daunting,” said one of YouTube’s former advertising sources, according to Adweek. “For anyone saying human vetting is the only way to go, be prepared for a vastly reduced YouTube with ‘waiting times’ and liberal arguments of censorship and free speech. If people are asking for machine learning to solve their problems, be prepared for issues like this to keep appearing. No amount of investment in people or technology will solve these issues for YouTube; it’s ingrained in the very DNA of the platform.”

Thanks for reading! This article was originally published for TrigTent.com, a bipartisan media platform for political and social commentary, truly diverse viewpoints and facts that don’t kowtow to political correctness.

Bailey Steen is a journalist, graphic designer and film critic residing in the heart of Australia. You can also find his work right here on Medium and publications such as Janks Reviews.

For updates, feel free to follow @atheist_cvnt across his various social media pages on Facebook, Twitter, Instagram or Gab. You can also get in contact through bsteen85@gmail.com for personal or business inquiries.

Stay honest and radical. Cheers, darlings. 💋

