When journalist Julia Ioffe was writing a profile of Melania Trump for the May 2016 issue of GQ, she probably didn’t expect the virulently anti-Semitic and misogynistic social-media response that would come in its wake. One tweet called Ioffe a “filthy Russian kike,” while others sent her photos of concentration camps with captions like “Back to the Ovens!”
On May 19, New York Times editor Jonathan Weisman tweeted about Jewish casino magnate Sheldon Adelson’s support for Trump and the anti-Semitic response to Ioffe’s article. The reaction was immediate. He received images of ovens, of himself wearing the yellow star, and of Auschwitz’s infamous entry gates, the path painted over with the Trump logo, the iron letters refashioned to read “Machen Amerika Great.”
The deluge of anti-Semitic vitriol that has been unleashed this election season has awakened many to a simple fact—the internet has a hate problem.
The ADL just released a study of online harassment of Jewish journalists that gives us a clearer idea of where cyberhate comes from and who is being targeted, but much remains unknown. What we have not yet studied is how harassment is experienced by its targets. What are the lasting effects on them as individuals, and what are the effects of cyber-harassment on public discourse as a whole? What concrete steps can we take to protect individuals from harm while safeguarding the free speech that remains so central to a well-functioning democracy?
Cyber-harassment—which has been defined as any online activity that involves threats of physical or sexual violence, invasion of privacy, material falsehoods designed to harm reputations, calls for mob-like attacks on individuals, and technological attacks—presents a unique problem in that attackers are often able to target victims anonymously, with little fear of retribution. The harms are worsened by the indefinite life of internet postings.
As we saw in our recent study, a handful of bad actors can deliver an outsize barrage. Internet harassers are constantly developing new techniques for targeting their victims. Attackers have inundated victims’ Twitter feeds with hate symbols and doctored photographs of them being shot, gassed, and hanged. Others have exposed victims’ personal communications and private information.
And still other harassers bombard their targets with graphic descriptions of physical or sexual violence. But whatever the technique, the underlying aim is the same: scaring the target into silence.
What happens to your worldview when you are targeted en masse by groups of largely anonymous strangers? How do things change when online activity bleeds into real life? How disorienting is it to be hated by people whose only knowledge of you is your gender, your race, your religion, or your sexual orientation?
As someone who was herself the target of widely publicized harassment 10 years ago, I spent much time thinking about this problem before I decided to fight back. My harassers targeted the characteristics inherent to who I am—my gender, race, and religion—and accompanied that with insults about my character, intelligence, and profession. The most frightening parts were the threats to rape and kill me, especially when accompanied by information about where I studied and lived.
Words alone do not convey the burden that a victim can feel.
Cyber-harassment is more than mere “trolling.” Online space provides factional groups with a disproportionate amount of power and visibility. A small group of misogynistic or racist individuals—or even a single person with a grudge and a WiFi connection—can have an outsize impact.
The ease with which someone can act anonymously online offers an opportunity for those who find it amusing to “pile on.” Bystanders may be offended by what is happening, but often will not feel like they can intervene, sometimes because they think it is pointless and sometimes for fear of retribution. The power is imbalanced, with attackers easily able to shame or frighten their targets, while the victims lack the power to effectively fight back. One of the most damaging outcomes of online harassment for society is the distortion of the public sphere by the silencing of what are often minority voices.
Over the past year, ADL has heard from journalists about increasing levels of online harassment stemming directly from their work covering U.S. political campaigns. Because a free press plays such a key role in the smooth functioning of a democracy, ADL became gravely concerned. Many targeted individuals, like Tablet’s Yair Rosenberg, reported that these incidents involved anti-Semitic language or imagery, along with threats of violence. Some attacks bled into offline abuse, such as when one targeted journalist was contacted by telephone by a “crime-scene cleanup service” hired by attackers, asking when the journalist would like them to come to her home.
ADL responded by creating a Task Force on Harassment and Journalism in June 2016. Last month, the Task Force issued its first report, analyzing the online abuse directed against Jewish journalists over a one-year period. The report details the terrifying extent to which journalists reporting on the 2016 presidential election cycle have been flooded with anti-Semitic abuse on Twitter.
Based on a broad set of keywords (and keyword combinations) designed by ADL to capture anti-Semitic language, there were 2.6 million tweets containing language frequently found in anti-Semitic speech between August 2015 and July 2016. These tweets had an estimated reach of about 10 billion impressions, which could help reinforce and normalize anti-Semitic language on a massive scale.
In fact, at least 800 journalists were targeted by anti-Semitic tweets with an estimated reach of 45 million impressions. The top 10 most targeted journalists (all of whom are Jewish) received 83 percent of these anti-Semitic tweets.
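The keyword-matching approach behind these figures can be sketched in miniature. The snippet below is an illustration only, not the ADL's actual methodology: the real study used a far larger, carefully designed set of keywords and keyword combinations, and "impressions" here are crudely proxied by author follower counts. The keyword list and tweet records are invented for the example.

```python
import re

# Illustrative placeholder keywords -- the ADL's actual list is far larger
# and includes keyword combinations designed to capture anti-Semitic language.
KEYWORDS = ["ovens", "kike"]

# Case-insensitive pattern matching any keyword as a whole word.
pattern = re.compile(
    r"\b(" + "|".join(map(re.escape, KEYWORDS)) + r")\b",
    re.IGNORECASE,
)

def flag_tweets(tweets):
    """Return tweets whose text matches any keyword, for human review."""
    return [t for t in tweets if pattern.search(t["text"])]

def estimated_impressions(tweets):
    """Rough reach estimate: sum of each flagged author's follower count."""
    return sum(t["followers"] for t in flag_tweets(tweets))

# Invented sample records standing in for collected tweets.
sample = [
    {"text": "Back to the Ovens!", "followers": 1200},
    {"text": "Great weather today", "followers": 300},
]
print(len(flag_tweets(sample)))       # 1 tweet flagged
print(estimated_impressions(sample))  # 1200
```

A real pipeline would add human review of flagged tweets, since bare keyword matching produces both false positives (news reports quoting slurs) and false negatives (coded language).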
The data didn’t just prove what we at ADL knew to be the case: that anti-Semitic targeting of journalists online was increasing at a terrifying rate. It was also a first step toward a data-based approach to conceptualizing and fighting bigotry and anti-Semitism. Advocates and individuals trying to curb the spread of online hate need to adapt to the techniques used by today’s hatemongers. We need to be on the cutting edge of technology in order to compete against the technologization of bigotry.
To do this successfully, we need more players in the game. We need designers and user experience specialists focused on enhancing empathy and designing tools to eliminate hate. We need like-minded coders, bloggers, and inventors to create solutions that permit free speech but encourage tolerance, civil discourse, and inclusiveness. Additionally, we need the major players—Twitter, Facebook, Google, Microsoft, and Apple—to double down on their efforts to hack a solution to this perennial problem.
Cleaning up the internet while respecting its norms of free speech is an ambitious quest. So were some of the other challenges we’ve set for ourselves as a society and met, like eradicating polio and landing a man on the moon. The personal computer and the internet itself are also products of such ambitious quests. I am confident that with enough focus and innovative thinking, we can find room enough on the internet to promote civility and effectively address the challenge of cyber-harassment without sacrificing fundamental free speech principles.
ADL is doing its part, working to stem the tide of online hate. We began this work over 30 years ago, with some of the first reporting on the use of the internet to convey hate speech. In 2014, we created a set of Best Practices for Responding to Cyberhate, which were widely welcomed throughout the tech industry. We are now taking this commitment to the next level. As ADL’s first Director of Technology and Society, I have been hired to establish ADL’s presence in Silicon Valley. I will be assembling a team to focus on the intersection of human rights and technology, with an emphasis on countering violent extremism, combating cyber-harassment and cyber-bullying, and exploring social justice applications of new technology.
We will create a new Center for Technology and Society, to be a focal point for looking at civil rights and combating discrimination in new ways. As a first step, I will work alongside technology companies to increase awareness, to develop better tools for protecting users from bigotry and discrimination, and to identify perpetrators of cyberhate. Even with these ambitious plans, all of ADL’s work will be for naught if we do not go to the root of the problem.
And just this month, ADL is launching its inaugural summit on anti-Semitism—Never Is Now—which will take place in New York City on Nov. 17. At Never Is Now, we will address the underlying causes of cyberhate. I’ll be joined by dozens of brilliant thinkers from across the Jewish and tech worlds to talk about rising anti-Semitism and methods for combating it. I am looking forward to hearing from Steve Coll, dean of the Columbia School of Journalism, and Jonathan Weisman of The New York Times, who will both join me on a panel to discuss how we can respond productively to online hate.
Please consider this an open invitation to join us for the discussion of how to combat cyberhate, and to bring useful ideas and tactics back to your own community.
We cannot win this fight by ourselves. We need a team—and that is where you come in. Help bring about a world without hate by modeling good online behavior. Call out bigotry, discrimination, and hatred when you see it.
Together, while embracing the plurality of ideas and freedom of speech, we can build a better world. For me, a victory is more than restricting hateful words. Together we can start by imagining a Twitter without hate. But we at ADL believe we can think even bigger: together, we can imagine a world without hate.
Taking action against hate is especially important in the context of America’s electoral season. While we may not be able to re-bottle the hatred and reset some of the norms that have been violated, we can draw a new line.
By standing up to hatred and affirming that our strength lies in our diversity, we can recommit ourselves to the core values that make America a leader across the globe.
We can help our communities heal. We can create safe online and offline spaces, where we are all able to be respected for our opinions, to do our jobs without harassment, and to truly live and thrive in a world without hate.
Brittan Heller is the Director of Technology and Society at the Anti-Defamation League. She is based in Silicon Valley.