
The Coming Gentrification of YouTube

Moderating YouTube won’t get rid of all the extremist content, just the stuff that doesn’t make money

by
David Auerbach
June 19, 2019
Illustration: Tablet Magazine

Marshall McLuhan was wrong: The moderation is the message. That, at least, is the message one gets from reading The New York Times’ story about the phenomenon of YouTube radicalization. The piece chronicles how YouTube sent a young man down a rabbit hole of increasingly extremist right-wing videos—all the better for YouTube, which kept him clicking, all the worse for society. It ends on the equivocal note that the man has “successfully climbed out of a right-wing YouTube rabbit hole, only to jump into a left-wing YouTube rabbit hole.”

Fears around such radicalization have led many to insist that YouTube should go beyond its current policy of removing only the extreme content that crosses hard lines of malicious harassment, hate speech, or child endangerment. Felix Salmon in Axios argues for “principles-based” moderation, which would allow for ad hoc, non-precedential takedowns whenever YouTube decides that a particular video is causing “significant harm.”

In practice, however, granting YouTube such wide latitude to police content would fail to address the underlying problem. No matter how much YouTube cracks down, it will still consider Sean Hannity, Tucker Carlson, and even Steve Bannon to be uncensorable, authoritative sources, because, like it or not, society considers them to be authoritative sources. What many are asking of YouTube amounts to, “Please remove some of your harmful content, but only the unimportant stuff.” But lest we forget, YouTube is a profit-maximizing corporation, not an organ of representative democracy or the public trust. Insofar as YouTube responds to public concerns about its content, the company will be guided not by the political conscience of its critics but by a desire to limit liability while protecting its bottom line.

I’ve experienced the unpleasant caprices of YouTube recommendations. (I used to work for Google, but never got anywhere near YouTube or its algorithms.) While I was watching a thoughtful talk on the limits of machine learning, YouTube automatically queued up “THE ARTIFICIAL INTELLIGENCE AGENDA EXPOSED” by David Icke, the British former professional soccer player turned full-time conspiracy theorist, infamous for declaring that the Rothschilds are actually members of an alien lizard race that secretly runs the world. Icke describes how an unspecified “THEY” (possibly the lizards, or the Jews, or both) are getting youth addicted to technology so that they can later be connected to artificial intelligence and become AI themselves.

The video I was watching had 2,500 views. Icke’s had 250,000.

Ironically, The New York Times boosted Icke’s profile last year while interviewing author Alice Walker. Walker raved about one of Icke’s books, saying “In Icke’s books there is the whole of existence, on this planet and several others, to think about.” The Times defended the piece by saying that Walker was “worthy of interviewing.”

It’s dismaying to see Icke’s ignorance and bigotry promoted anywhere, and yet an open, liberal society requires that he have the civic right to express himself. The question is, what responsibilities do platforms like YouTube and The New York Times have to limit his exposure?

Unlike the Times, YouTube has a softer method of moderation: it demonetizes videos so their creators can’t profit from them. YouTube will not place ads on, or share revenue from, videos that don’t meet a considerably higher bar of safe, inoffensive content. Such videos are also penalized in rankings and not recommended. Right-wing loudmouth Steven Crowder, subject of much controversy last week, has been demonetized. That Icke AI video has not, possibly because Icke does not talk about lizards or Jews in it.

Yet even if the Icke video is demonetized, it won’t draw any more attention to the AI video I was originally watching. It’s just not popular enough. And this is why crowd-sourced recommendations are dangerous in general. They tend to draw attention to the popular, the established, and the controversial. Yet asking YouTube to override the collective hive mind is placing social authority in the hands of a for-profit corporation, and when has that ever worked out well?
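To make that dynamic concrete, here is a toy sketch in Python. It is not YouTube’s algorithm, whose actual workings are proprietary and far more complex; the scoring rule and the demonetization multiplier are invented purely for illustration. It only shows how a ranking driven by raw popularity buries a 2,500-view video under a 250,000-view one, and how demonetization demotes a video without removing it.

```python
# Toy sketch of popularity-biased ranking. NOT YouTube's actual
# algorithm: the score function and the 0.1 demonetization penalty
# are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    views: int
    demonetized: bool = False

def recommendation_score(v: Video) -> float:
    score = float(v.views)   # raw popularity dominates the ranking
    if v.demonetized:
        score *= 0.1         # demoted in recommendations, never removed
    return score

candidates = [
    Video("Thoughtful machine-learning talk", views=2_500),
    Video("THE ARTIFICIAL INTELLIGENCE AGENDA EXPOSED", views=250_000),
]

for v in sorted(candidates, key=recommendation_score, reverse=True):
    print(f"{v.title}: {recommendation_score(v):,.0f}")
```

Even a tenfold demonetization penalty leaves the hundredfold-more-popular video comfortably on top, which is why demonetizing Icke would do nothing for the talk I was actually watching.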

Like its sibling Google, YouTube is ultimately accountable to Alphabet shareholders. Google found itself in an analogous situation in 2017, when it changed its ranking of news sources in response to heavy criticism over “fake news” after the 2016 election. Just as YouTube is promising to do now, Google News pushed “authoritative” sources to the top of results while demoting all others, right and left. This came as a shock to leftist outlets like AlterNet that had initially praised the move, as they apparently failed to anticipate that “media like AlterNet—dedicated to fighting white supremacy, misogyny, racism, Donald Trump, and fake news—would be clobbered by Google in its clumsy attempt to address hate speech and fake news.” AlterNet, ACLU.org, Truthdig, and Truthout all suffered double-digit losses in Google referrals, while Fox and the Daily Wire got a boost.

As YouTube tightens the monetization belt, I predict a similar process. The bar for monetization will continue to rise, until YouTube’s recommendation engine comes to resemble Netflix’s or Amazon’s: YouTube will primarily promote professional, established content providers, because they are the only ones on which YouTube makes money. Establishment providers will be given far more leeway than upstarts.

Contrariwise, YouTube will discourage demonetized providers without banning them. Many small channels that say exactly the same things as Fox or MSNBC will lose their monetization simply for attracting negative attention or being edgy. YouTube will provide them with few recommendations and little support. Independent content makers were the original lifeblood of YouTube, but their chance of success will now be far lower, because they will have to build an audience with minimal aid from YouTube’s recommendation network.

Much of this content will not be missed; a bit of it will. Most of YouTube is crap, but as Theodore Sturgeon famously observed, 90% of everything is crap. Networks like YouTube, Facebook, and Twitter have reached a point of societal saturation where a sufficient number of influential people wish for that crap not to be shoved in their face. What remains after YouTube’s upcoming cleanup will inevitably be mostly crap, but it will be more established and refined crap. It will be Fox pundits ranting about immigrants and Islam. It will be wild speculation around Trump hiring prostitutes to urinate on Russian beds. It will be David Icke talking about the AI takeover instead of lizards.

Will that be an improvement? Some people seem to think so. For my part, I recommend that you ignore recommendations altogether, and instead chart your own path through our collective landscape of gem-flecked rubbish.

***


David Auerbach is the author of Bitwise: A Life in Code (Pantheon). He is a writer and software engineer who has worked for Google and Microsoft. His writing has appeared in The Times Literary Supplement, MIT Technology Review, The Nation, Slate, The Daily Beast, n+1, and Bookforum, among many other publications. He has lectured around the world on technology, literature, philosophy, and stupidity. He lives in New York City.