
Google’s New Hate Speech Algorithm Has a Problem With Jews

And that’s probably because it reads The New York Times and the Guardian

by Liel Leibovitz
July 28, 2017
Shutterstock

Don’t you just hate how vile some people are on the Internet? How easy it’s become to say horrible and hurtful things about other groups and individuals? How this tool that was supposed to spread knowledge, amity, and good cheer is being used to promulgate hate? No need to worry anymore: Google’s on it.

Earlier this year, Silicon Valley’s overlords introduced Perspective, an API, nerd-speak for Application Programming Interface, or a set of tools for building software. The idea behind it is simple: because it’s impossible for an online publisher to manually monitor all the comments left on its website, Perspective will use advanced machine learning to help moderators track down comments that are likely to be “toxic.” Here’s how the company describes it: “The API uses machine learning models to score the perceived impact a comment might have on a conversation.”
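In practice, Perspective is a web service: a publisher sends a comment’s text to Google’s servers and gets back a score between 0 and 1 representing the model’s estimate that readers will perceive it as toxic. The sketch below is a minimal illustration of such a call, assuming the v1alpha1 comments:analyze endpoint and TOXICITY attribute that Google documented at launch; the API key is a placeholder, not a real credential.

```python
# Minimal sketch of scoring a comment with the Perspective API.
# Assumes the v1alpha1 REST endpoint and the TOXICITY attribute;
# API_KEY is a placeholder for a key from a Google Cloud project.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the model's estimated probability that `text` is perceived as toxic."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    # The summary score is a number between 0 and 1.
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("Jews control the banks and the media."))
```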

That’s a strange sentiment. How do you measure the perceived impact of a comment on a conversation? And how can you tell if a conversation is good or bad? The answers, in Perspective’s case, are simple: machine learning works by giving computers access to vast databases and letting them figure out the likely patterns. If a machine read all the cookbooks published in the English language in the last 100 years, say, it would be able to tell us interesting things about how we cook, like the peculiar fact that when we serve rice we’re very likely to serve beans as well. What can machines tell us about the way we converse and about what we may find offensive? That, of course, depends on what databases you let the machines learn from. In Google’s case, the machines learned from the comments sections of The New York Times, the Economist, and the Guardian.
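To make that point concrete, here is a toy sketch, not Google’s actual model and built on a hypothetical hand-labeled corpus, of how such a system is trained: a classifier is fit on comments that human moderators have already marked as toxic or not, and its scores for new comments can only reflect the judgments baked into that labeled data.

```python
# Toy illustration (not Google's model): a text classifier trained on a
# hypothetical hand-labeled comment corpus reproduces whatever its
# labelers treated as "toxic".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = [
    "you people are all the same",
    "thanks for sharing this recipe",
    "go back to where you came from",
    "great article, learned a lot",
]
train_labels = [1, 0, 1, 0]  # 1 = marked toxic by moderators, 0 = not

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_comments, train_labels)

# The "toxicity" probability for a new comment is just a reflection of
# the patterns in the labeled examples above.
print(model.predict_proba(["you people ruin everything"])[0][1])
```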

What did the machines learn? Only one way to find out. I asked Perspective to rate the following sentiment: “Jews control the banks and the media.” This old chestnut, Perspective reported, had a 10 percent chance of being perceived as toxic.

Maybe Perspective was just relaxed about sweeping generalizations that have been used to stain entire ethnic and religious groups, I thought. Maybe the nuance of harmful stereotypes was lost on Google’s algorithms. I tried again, this time with another group of people, typing “Many terrorists are radical Islamists.” The comment, Perspective informed me, was 92 percent likely to be seen as toxic.

What about straightforward statements of facts? I reached for the news, which, sadly, has been very grim lately, and wrote: “Three Israelis were murdered last night by a knife-wielding Palestinian terrorist who yelled ‘Allah hu Akbar.’” This, too, was 92 percent likely to be seen as toxic.

You, too, can go online and have your fun, but the results shouldn’t surprise you. The machines learn from what they read, and when what they read is the Guardian and the Times, they’re going to inherit the inherent biases of those publications as well. Like most people who read the Paper of Record, the machine, too, has come to believe that statements about Jews being slaughtered are controversial, that addressing radical Islamism is verboten, and that casual anti-Semitism is utterly forgivable. The very term itself, toxicity, should’ve been enough of a giveaway: the only groups that talk about toxicity (see under: toxic masculinity) are those on the regressive left who creepily apply the metaphors of physical harm to censor speech, not to celebrate or promote it. No words are toxic, but the idea that we now have an algorithm replicating, amplifying, and automating the bigotry of the anti-Jewish left may very well be.

Liel Leibovitz is editor-at-large for Tablet Magazine and a host of its weekly culture podcast Unorthodox and daily Talmud podcast Take One. He is the editor of Zionism: The Tablet Guide.