
How YouTube Can Fix Its White Nationalism and Anti-Semitism Problem

The Google-run video giant is losing advertisers due to its inability to police its own content. Here’s how it can turn things around.

by Yair Rosenberg
March 31, 2017
Two people visit the Google stand on the second day of the Mobile World Congress in Barcelona, Spain, February 28, 2017. Lluis Gene/AFP/Getty Images

This has not been a good month for YouTube. In recent weeks, the Google-owned video giant has been hit with a string of bad press, as prominent brands like AT&T, Johnson & Johnson, J.P. Morgan, and Lyft have pulled the plug on their advertising with the service. The reason? The site’s automated ad-placement system has repeatedly and embarrassingly showcased their products alongside videos promoting racism, terrorism, and hate speech. The result? Google’s shares have fallen significantly amid an advertiser exodus that has also included Toyota, Heinz, Volkswagen, and Verizon.

It did not have to be this way, but YouTube has long allowed this problem to fester. On March 2, I warned about the fundamental flaws in YouTube’s content moderation system. As currently constituted, the service does not police videos itself, but instead relies on users to flag offensive content. This system is entirely inadequate to its task. Bigotry typically targets minorities, who are by definition fewer in number. As such, there are simply fewer users around to flag offensive content directed at minorities, yet such flags are precisely what YouTube requires before it will take action.

Moreover, as I noted, this system is ripe for abuse by malicious users, who can, and have, successfully targeted journalists and researchers reporting on hate speech for takedowns. Thus, Hamas videos promoting brutal violence against Jewish civilians remain on the site, while Tablet’s translation of one such video, produced to raise awareness of its bigotry, was removed.

There is another way. If YouTube wants to finally get serious about hate speech on its platform, if only for financial reasons, here are a few suggestions to get the ball rolling.

Bring in experts. Right now, YouTube’s content moderation is literally amateur hour, relying on random users and untrained technicians to filter content. Naturally, as noted above, they make mistakes. A lack of cultural competency in bigoted discourse leads moderators both to miss genuinely toxic content and to misidentify harmless content as offensive. Experts can bridge this gap, whether in the form of individuals or organizations like the Anti-Defamation League. Some of this consultation is slowly beginning to take place behind the scenes, but it needs to be formalized into the process.

Compile keywords. One of the easiest ways to see just how poorly YouTube polices itself is to search for bigots’ favorite buzzwords. A search for “goyim” (the Hebrew/Yiddish word for gentiles) immediately turns up tons of anti-Semitic content. A search for “muzzies” likewise reveals an avalanche of anti-Muslim material. Any expert in the fields of anti-Semitism or Islamophobia could have told YouTube to watch out for such terms, making content management much easier. If the service wants to finally tackle the problem of abuse on its platform, it should work with subject-area experts to compile lists of suspect keywords to streamline the flagging and moderation process.
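To make the idea concrete, here is a minimal sketch of what such keyword screening might look like. The term list, function names, and triage logic are illustrative assumptions for this article, not anything YouTube actually uses; the point is that keyword hits route videos to human review rather than triggering automatic takedowns.

```python
# Hypothetical sketch: surface videos whose titles or transcripts contain
# terms that subject-area experts have flagged as common in bigoted content.
# The watchlist and categories below are illustrative only.

SUSPECT_TERMS = {
    "goyim": "anti-Semitic",   # example term cited in this article
    "muzzies": "anti-Muslim",  # example term cited in this article
}

def screen_for_review(title: str, transcript: str) -> list[str]:
    """Return the categories of suspect terms found, so a human
    moderator can prioritize the video for review. A keyword hit is
    a triage signal, not a verdict: a word like 'goyim' also appears
    in harmless and scholarly contexts."""
    text = f"{title} {transcript}".lower()
    return sorted({
        category
        for term, category in SUSPECT_TERMS.items()
        if term in text
    })

if __name__ == "__main__":
    hits = screen_for_review(
        title="Example upload",
        transcript="... the goyim ...",
    )
    if hits:
        print(f"Queue for expert review; matched categories: {hits}")
```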

Priority flaggers. Users who have consistently shown good judgment in identifying bigoted content should go to the front of the line when YouTube’s technicians are sifting reports of problematic videos. This would speed up the process and reduce false positives. The company could also hire individuals specifically tasked with identifying such content, with different divisions specializing in different types of bigotry.
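Here, too, a rough sketch may help. This one assumes, purely for illustration, that each flagger has a track record (the fraction of their past reports that moderators upheld) and that reports from more reliable flaggers are reviewed first via a priority queue; the scores and names are invented.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch: reports from flaggers with strong track records
# jump the review queue. Accuracy scores below are illustrative only.

@dataclass(order=True)
class Report:
    priority: float                       # lower value = reviewed sooner
    video_id: str = field(compare=False)
    flagger_id: str = field(compare=False)

# Illustrative history: fraction of each flagger's past reports upheld.
FLAGGER_ACCURACY = {"trusted_ngo": 0.97, "new_user": 0.50}

def enqueue(queue: list[Report], video_id: str, flagger_id: str) -> None:
    accuracy = FLAGGER_ACCURACY.get(flagger_id, 0.50)
    # Higher historical accuracy -> lower priority number -> reviewed first.
    heapq.heappush(queue, Report(1.0 - accuracy, video_id, flagger_id))

queue: list[Report] = []
enqueue(queue, "vid123", "new_user")
enqueue(queue, "vid456", "trusted_ngo")
first = heapq.heappop(queue)
print(f"Review first: {first.video_id} (flagged by {first.flagger_id})")
```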

A caveat: There is a compelling case to be made that YouTube should not be in the business of policing its content at all, as empowering a corporation to decide what constitutes acceptable online discourse sets a worrisome precedent. But given that YouTube has already been doing this for years, and given that its business model clearly depends on its ability to deliver on that commitment, these suggestions should at least make the process more honest, more transparent, and less prone to abuse.

Yair Rosenberg is a senior writer at Tablet. Subscribe to his newsletter, listen to his music, and follow him on Twitter and Facebook.