After the 2024 presidential election, discontent with social media itself surged among users, many of them from communities of color, who were frustrated by algorithms and policies they say stifle their voices and content.

Many of those fleeing X, formerly known as Twitter, are protesting owner Elon Musk’s decision to lend his prodigious funds and social influence to President-elect Donald Trump’s campaign. Discontent over Musk’s leadership of X had been brewing before the election: his moves to loosen the platform’s moderation standards and decimate its Trust and Safety team have led to a documented increase in hate speech and far-right content.

More than 116,000 people deactivated their X accounts on the day after the election, the most deactivations in a single day since Musk took over the platform.

As MSNBC host Joy-Ann Reid, who left X on Nov. 13, said, “I just realized that it’s not really worth it, because … you have to wade through a lot of dreck and a lot of abuse and a lot of negativity.”

Meanwhile, amid continued complaints from users about TikTok and Meta throttling their posts, a growing number of content creators from marginalized communities are similarly taking stock of whether their social media habits align with their values. 

An increasing number are turning to alternative platforms such as Bluesky and Fanbase, among others. Those who remain on mainstream platforms have figured out ways to resist by tricking the algorithms into allowing their speech to remain free and unfettered in these spaces.   

Suppressed voices in social spaces

A long-running critique of mainstream platforms such as TikTok and Meta-owned Facebook is that their algorithms filter out discussions about race and racism across the board, whether hateful or not. Users from marginalized communities and antiracist organizations have bristled at these clumsy algorithms for more than a decade.

Creators of content supporting Black Lives Matter have repeatedly accused platforms like TikTok of unfairly suppressing their posts, even after TikTok pledged to support Black creators following a “technical glitch” that supposedly made it “appear” as though the platform was hiding posts with the hashtags #BlackLivesMatter and #GeorgeFloyd.

In 2021, Black creators on TikTok went on “strike,” partially due to concerns that the platform was disproportionately shadow banning marginalized communities.  

“Similar to the ways off the app Black folks have always had to galvanize and riot and protest to get their voices heard, that same dynamic is displayed on TikTok,” creator Erick Louis told the New York Times during the “strike.” “We’re being forced to collectively protest.” 

Due to the platforms’ caginess around the particulars of their content moderation algorithms, it can be difficult to systematically verify these claims of biased suppression. However, as reported by The Intercept, leaked documents suggest that TikTok has previously instructed moderators to suppress uploads from users who are visibly disabled, “ugly,” elderly, or who record videos in “dilapidated housing.”

Representatives from X, TikTok, and Meta did not respond to The Emancipator’s request for comment.

‘Algospeak’ as social media subversion

Apart from boycotts and going to the media, users from marginalized backgrounds have developed slang, known in scholarly literature as “algospeak,” to slip past moderation algorithms. Popular examples from the lexicon include “yt” for the term “White,” and “le$bean” for “lesbian.” Algospeak can extend to emojis, such as ✋🏻 to indicate “White.” Some creators have even taken to displaying their palms in reference to the emoji, as another workaround for “White.”
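To see why such substitutions work, consider a minimal, hypothetical sketch of a keyword blocklist, a deliberately simplified stand-in for a moderation filter (the terms and function below are illustrative, not any platform’s actual code):

```python
# Hypothetical sketch: a naive keyword blocklist of the kind algospeak is
# designed to slip past. Not any platform's real moderation code.
BLOCKLIST = {"white", "lesbian"}  # terms a clumsy filter might suppress outright

def is_flagged(post: str) -> bool:
    """Flag a post if any blocklisted term appears as a standalone word."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(is_flagged("a conversation about White privilege"))  # True: suppressed
print(is_flagged("a conversation about yt privilege"))     # False: "yt" slips through
print(is_flagged("proud le$bean creator"))                 # False: "le$bean" evades the match
```

Because the filter matches exact strings, changing even one character defeats it, which is precisely the gap algospeak exploits.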

There are also signs that major platforms generally want to minimize politically charged discussions around race and identity on their sites. Meta announced this year that, on two of its platforms, Instagram and Threads, it will stop recommending political content from accounts a user does not already follow.

“We have never seen a clear definition of how ‘political’ content is defined by Meta,” the social media team for the national Black Lives Matter Foundation wrote in an email to The Emancipator, also asserting that “it is very unclear — but likely — that anti-racist content or other BLM focused content is being impacted and down ranked by this change.” 

Social media companies may simply have little motivation to refine their algorithms to distinguish toxic posts about race from constructive ones; suppressing both across the board is easier and courts less controversy.

“A lot of people that I interviewed said that TikTok just wants happy families dancing and cooking together,” said Kathryn Yurechko, an Oxford University researcher who has conducted studies interviewing people who use algospeak. “So there’s the question of whether the algorithms are really bad at context, or are they purposefully biased?”  

New spaces, freer speech

Emerging platforms with stricter moderation policies and less political baggage seem to be reaping the rewards from this season of mainstream social media discontent. For example, in the 18 days following the election, microblogging site Bluesky — an alternative to X — saw a 300% increase in the number of daily users. 

These newer platforms now have an opportunity to siphon off users from the likes of X or TikTok, not only by adopting stricter hate speech policies but also by designing more nuanced content algorithms that are sensitive to productive discussions around identity, making algospeak less necessary.

Saadia Gabriel, a computer science professor at UCLA, aims to design such an algorithm. One technical fix that she has been researching: improving the datasets that train algorithms. These datasets often consist of millions of posts that people have labeled as being permissible or hateful, so that the algorithm can learn what toxicity looks like. 
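As a rough illustration of what that training step looks like (not Gabriel’s actual pipeline; the tiny dataset below is invented), a toxicity classifier can be fit to human-labeled posts in a few lines:

```python
# Illustrative sketch of training a toxicity classifier from labeled posts.
# Real datasets contain millions of annotated examples; these four are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I hate all of you people",   # annotators labeled this hateful (1)
    "great recipe, thank you!",   # permissible (0)
    "you are all the worst",      # hateful (1)
    "love this community",        # permissible (0)
]
labels = [1, 0, 1, 0]

# The model learns which word patterns co-occur with the "hateful" label, so
# any bias in the annotations is baked directly into its predictions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)
print(model.predict(["thank you, love it"]))  # expected: [0]
```

Whatever patterns the annotators reward or punish, fairly or not, become the model’s working definition of toxicity.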

However, social media companies often crowdsource such annotation efforts, and the people doing the labeling aren’t necessarily aware of the cultural context in which the content was posted. What one annotator classifies as hateful speech may, in context, be reclaimed language or a constructive discussion.

A study conducted by Gabriel and others found that the datasets X has used tend to associate African American English (AAE) with toxicity, which leads the algorithms to propagate that bias.

“You have pretty skewed populations of people who can actually do the annotation — a lot of times you don’t get a very good representation from the people who are actually being targeted by hate speech, and who use reclaimed language,” Gabriel said. “If you explicitly recruit people who are represented by the data in terms of being members of these groups that are being targeted, then that can change things.” The study also found that simply priming people to think about the race of the content creator led to more nuanced annotations.  
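One way to surface the kind of skew the study documents is an audit comparing false-positive rates across dialects. Continuing the hypothetical sketch above (the example posts here are invented, not drawn from the study’s data):

```python
# Hypothetical audit: how often does the classifier wrongly flag benign posts,
# broken out by dialect? Reuses the `model` trained in the earlier sketch.
def false_positive_rate(model, benign_posts):
    """Share of benign posts the model mislabels as toxic."""
    predictions = model.predict(benign_posts)
    return sum(predictions) / len(predictions)

# Invented benign posts: one set in African American English (AAE),
# one set in Standard American English (SAE).
benign_aae = ["bruh that show was wild fr", "we was out here all night"]
benign_sae = ["that show was really wild", "we were out all night"]

gap = false_positive_rate(model, benign_aae) - false_positive_rate(model, benign_sae)
print(f"False-positive gap (AAE minus SAE): {gap:.2f}")  # a positive gap signals dialect bias
```

A persistently positive gap is the quantitative signature of the bias Gabriel describes: benign AAE posts flagged at higher rates than equivalent posts in other dialects.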

Another improvement Gabriel suggested was to train algorithms to consider the demographics of the content creators, which could be an indicator that they are using reclaimed language or describing experiences with racism. 

Gabriel found that algorithms have erroneously flagged victims of hate speech who were recounting their encounters with racism. Gabriel and her fellow researchers have had success in inferring identities by using dialects like AAE as a proxy for race.
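One simple way such a signal could be fed to a model, analogous to priming human annotators, is to attach the inferred dialect to the text before classification. This is an illustrative pattern only, not the researchers’ implementation:

```python
# Illustrative only: tag each post with its inferred dialect so a classifier
# trained on tagged text can learn dialect-aware distinctions, e.g. between
# reclaimed language and genuine attacks.
def with_dialect_tag(post: str, dialect: str) -> str:
    """Prepend an inferred-dialect marker to the post text."""
    return f"[dialect: {dialect}] {post}"

# In practice the dialect would be inferred automatically from the text;
# here it is supplied by hand for illustration.
print(with_dialect_tag("we was out here all night", "AAE"))
# -> "[dialect: AAE] we was out here all night"
```

Trained on tagged examples, a model gets the same contextual nudge that made human annotators more careful.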

Metadata and demographic information about the area from which a user is posting can also supplement these efforts. Inferring the race of a user does have privacy implications, though it might not be all that different from the demographic inference platforms already perform.

Another strategy is for platforms to proactively communicate that welcoming users from marginalized backgrounds is a priority.

“They have an opportunity to make themselves stand out and not repeat … the suppression that we see on these more established platforms,” said Nadia Karizat, a University of Michigan researcher who examines algospeak. “For a new platform to deliberately from the start have it be a central value … would be ideal.” 

One video-based platform that seems to be taking this tack is Fanbase, which bills itself as a Black-owned social media company that gives creators 50% of the revenue generated from selling subscriptions and bonus features — more than double the revenue share that users on Patreon and OnlyFans receive.

Fanbase founder Isaac Hayes III — son of legendary musician Isaac Hayes Jr. — said in a Mic interview that the company has been taking particular care to ensure that its algorithms do not shadow ban people of color or LGBTQIA+ users. Fanbase spokesperson Noah Washington told The Emancipator that the platform is partly able to prevent biased enforcement by relying on subscriptions rather than ad revenue. Advertisers on other platforms have configured their campaigns to ensure that their ads do not appear alongside content containing keywords such as “Black people,” “BLM,” or “George Floyd.” This tendency exerts pressure on platforms to deprioritize such content more generally.  

“Facebook and Instagram are bound by the advertisers, versus Fanbase, [which] is bound by the creators,” Washington said. Fanbase also has an explicit political viewpoint, promoting itself as a product of Barack Obama’s 2012 JOBS Act, and positioning itself against Musk and Trump.

“Unfortunately, Instagram and Facebook are all very neutral platforms. There is no platform that acts as a resistance fighter,” Washington said. “Fanbase is that place, whereas Trump and Elon have made their social platforms conservative leaning.” Fanbase hopes that these kinds of clear statements of values will attract a demographically diverse audience.  

It will not be easy for emerging platforms to compete with legacy companies that have amassed huge user bases. Creators on established platforms typically spend years building large audiences, and luring followers to a new network is difficult; starting from scratch elsewhere means relinquishing the fruits of that effort.

“Transitioning to these platforms poses challenges, including rebuilding our audience and maintaining engagement. While these platforms offer stronger moderation policies, they currently lack the reach of Twitter (X),” the Black Lives Matter social media team told The Emancipator when asked whether they would ever leave X for a different social network. They noted, “Throughout Black history, we’ve rarely had the luxury of walking away when faced with discomfort or inconvenience.”
