Throughout the past year, social media companies have rolled back their hate speech policies and scaled back the teams responsible for moderating this content, following in the footsteps of Elon Musk and Twitter, now known as X. As a free speech lawyer and former senior policy official at Twitter and Twitch responsible for writing and enforcing the companies’ hate speech policies, I have been concerned that this approach is not only ahistorical, but that it might allow unrestricted hate speech to flourish on these platforms. I fear that this hate speech might cause serious harm to marginalized populations, if not deadly violence or even genocide.

Over the past week and a half, as the Israel-Hamas war has unfolded, I have watched in horror as my fears about hate speech and violence have begun to come true. Social media platforms, now stripped of their rules and enforcement teams, have been flooded with harmful misinformation and a surge of hate speech toward both Muslims and Jews. As the FBI warned of violent threats, a six-year-old boy near Chicago was targeted in a hate crime and stabbed 26 times by his landlord for being Palestinian. Without serious enforcement of anti-hate speech policies on social media, this danger might only grow.

The link between speech and genocide might seem like a stretch, but research has shown the danger of hate speech when amplified by mass media. In 1915, the first blockbuster movie, Birth of a Nation, explored the history of the Civil War and Reconstruction, but the National Association for the Advancement of Colored People accused it of romanticizing racism and glorifying racial violence. After the film was shown across the country, the KKK’s numbers swelled, allowing the organization to spread beyond its Southern base and setting the stage for the deaths of thousands of Black Americans.

The link between dehumanizing hate speech and genocide only continued in the era of radio. In Rwanda, the private radio station Radio Télévision Libre des Mille Collines (RTLM) was created in the early 1990s to incite violence against the Tutsi ethnic minority. It aired programs that called Tutsis “cockroaches” and “snakes,” and urged the Hutus, Rwanda’s ethnic and political majority, to “exterminate” and “cut down” the Tutsi people. Almost a million Tutsis died during the genocide, and in 2003, an international tribunal convicted the executives behind RTLM of genocide for their role in inciting violence through hate speech on the radio.

Hate speech on social media has long had devastating consequences. In 2022, Amnesty International published research showing how hate speech on social media substantially contributed to the genocide of the predominantly Muslim Rohingya ethnic minority in Myanmar. Although hate speech was already prohibited by Facebook and Twitter, social media feeds in the country were flooded with posts calling the Rohingya “invaders” and “dogs” that needed “to be shot.” Nearly 10,000 Rohingya were killed and almost one million were forced to flee the country. Facebook took the majority of the blame due to its large presence in Myanmar, but the UN’s Independent International Fact-Finding Mission on Myanmar said that multiple social media platforms played a significant role in the atrocities. Most importantly, these findings established an irrefutable, direct link between hate speech on social media and deadly violence offline.

Rohingya refugees are placed in a temporary shelter for medical checkups and food in Matang Pasi village, Bireun district, Aceh province, on Oct. 16, 2023, before being transported to a refugee camp. Credit: AFP via Getty Images

To be sure, regulating hate speech is hard, and defining hate speech can be subjective, especially on social media. But I know from experience that it’s possible. In 2021, my team and I used international human rights standards to update Twitter’s rules about hate speech. The results were swift. Six months after the new rules were implemented, the company reported a 77% increase in the number of accounts penalized for hate speech on the platform.

Now, with a deadly hate crime already committed in the U.S. and talk of a future genocide unfolding before our eyes in Gaza, it’s time to stop the rollback of these hate speech protections. Individuals and advocates should once again demand that advertisers apply pressure on social media companies to reinstate their hate speech policies in line with human rights standards and immediately reinvest in the safety teams who moderate this content.

These decisions, however, should not be dictated by the whims of CEOs like Musk or the ultra-partisan agendas of governors in Florida and Texas, two states with cases currently before the Supreme Court about what is shown on social media platforms. Rather, people should encourage their representatives to support a bipartisan proposal from Senators Elizabeth Warren and Lindsey Graham that would take steps toward reining in the power of social media companies through an independent regulatory body.

Social media is the most powerful medium of our day, and hate-filled ideas on its platforms can spread in an instant, in a way that has already proven dangerous, even fatal, to marginalized communities. It’s time we stand up for the safety of these groups by demanding that platforms take greater measures to monitor hate speech and by advocating for federal regulation of social media policies to promote long-lasting change before it’s too late.


Anika Collier Navaroli is a Senior Fellow at the Tow Center for Digital Journalism at Columbia University and a Public Voices Fellow on Technology in the Public Interest with The OpEd Project in partnership with The MacArthur Foundation. She previously held senior policy positions at Twitter and Twitch. In 2022, she blew the whistle about her warnings to Twitter that went unheeded in the lead-up to the January 6th attack on the Capitol, as well as the platform’s ultimate decision to suspend former President...