5 Reasons Social Media Platforms Aren’t Quick to Block Racist Accounts 

Here’s why sites like Twitter and GoDaddy are facing tough decisions in the Trump era


(Photo: Getty)

In an age when major presidential decisions first break on Twitter and massive rallies are organized on Facebook, social media has become a key player in what happens offline, putting tech companies in a dangerously powerful position that they are still learning to navigate.

When the neo-Nazi website The Daily Stormer published an article disparaging Heather Heyer, the 32-year-old woman killed while protesting the white nationalist rally in Charlottesville, Virginia, its domain provider, GoDaddy, cut off service to the site for violating its terms of use, specifically the provision against using its services in a way that “promotes, encourages or engages in terrorism, violence against people, animals, or property.”

The website, which according to the hate-monitoring group the Southern Poverty Law Center is “dedicated to spreading anti-Semitism, neo-Nazism, and white nationalism,” quickly found a way to get back online by moving its domain over to Google. That is, until Google announced that it, too, would be giving the racists the boot, cancelling the site’s registration within three hours.

GoDaddy and Google weren’t the only tech companies to take action. In the aftermath of the violent rally, Facebook cracked down on hateful content, banning the “Unite the Right” event page that was used to organize and promote the march, and deleting links to the offensive Daily Stormer article. Even Twitter, which says it prohibits violent threats, harassment and hateful conduct—but hasn’t always reacted quickly to allegations of abuse on the platform—took steps to suspend accounts linked to the Daily Stormer, as well as prominent white nationalist advocates.

This kind of hate speech has surged, both online and off, since Donald Trump took office. According to the Anti-Defamation League, the number of anti-Semitic incidents in the U.S. has spiked 86 percent this year, and reports of hate crimes in the U.S. and Canada are noticeably up since last November’s election.

Having such a divisive president in the White House has created some serious challenges for the powers-that-be on the internet. Mainly, how do you allow for free speech without providing a platform for hate speech?

The rules apply to everyone, even POTUS 

Despite having more than 38 million Twitter followers and holding one of the most powerful positions in the world, Trump is subject to the same rules and regulations as everyone else on the platform. According to Leanne Gibson, the Head of Agency Development for Twitter Canada, the company will suspend accounts that violate its rules, “whether [or not] they’re notable or verified.”

Of course, it gets more complicated when companies have to decide what actually counts as a violation of those rules. After POTUS tweeted, “Military solutions are now fully in place, locked and loaded, should North Korea act unwisely. Hopefully Kim Jong Un will find another path!” many Twitter users argued that the president’s post breached the platform’s terms of service.

But Twitter’s strong pro-free-speech stance means that they’re unlikely to intervene or block POTUS any time soon, especially given that there was no direct threat towards an individual in Trump’s 140-character attack.

“We are an open platform for expression, conversation and freedom of speech and that is what makes us unique,” says Gibson, adding that “it’s a real opportunity to see all sides. While you don’t have to agree with everything you see on Twitter, and no one is going to, it really is an opportunity to have that broader perspective on issues.”

But it does raise an important concern: if the U.S. president can tweet hateful rhetoric and threats without repercussions, is it open season for everyone?

Taking a stand risks rocking the boat

The move to take such a clear stand against racists is new for the tech world. Social media platforms have long been hesitant to arbitrate what is and is not appropriate content.

Twitter has faced repeated criticism for its slow response to the harassment of its users (often women or minorities), such as the cruel trolling of Zelda Williams after the suicide of her father, Robin Williams, and the racist attacks on Ghostbusters star Leslie Jones, which drove the actress off the platform. But thus far, Twitter has avoided making clear-cut rulings on what is, or is not, offensive. After all, once the company moves to ban certain content, it has drawn a political line in the sand, and as Gibson explains, “it can become a subjective conversation that really isn’t up to any social media company, really it’s a larger conversation [that should be handled] in Ottawa and Washington.”

Facebook also has a history of being hesitant to take clear action. With more than 2 billion active users, it can be hard to keep everyone happy. Meg Sinclair, the Head of Communications for Facebook Canada, says, “Facebook’s mission is to give people the power to build community and bring the world closer together. This can mean very different things to people from different backgrounds.”

As a result, the social media giant has for years tried to sidestep responsibility for what gets posted on its network, repeatedly insisting that it is a platform, not a media outlet, and, as such, not responsible for what individuals post.

Still, the company has a role in keeping its users safe. Sinclair states, “we don’t allow any behavior that puts people in danger, whether someone is organizing or advocating real-world violence or bullying others.” And, as the aftermath of the violent Virginia march shows, sometimes tech companies need to step in and take a stand.

Deleting accounts doesn’t delete hate

Another challenge with deleting individual accounts is that for every offensive user who gets blocked, more anonymous egg avatars pop up. “There are repeat offenders who create new accounts after being suspended,” says Twitter Canada’s Gibson. “In our last transparency report that was published this past spring, we note that we suspended more than 370,000 accounts related to terrorism for example, so there’s no question that our priority is all around user safety on our platform while keeping it an open platform for expression at the same time.”

These armies of bots and throwaway accounts can be used to manipulate online sentiment; in the case of Leslie Jones, repeat offender Milo Yiannopoulos encouraged thousands of followers to attack the actress online. While each of those “eggheads” might only have a handful of followers, their collective might can overwhelm the targets of their attacks.

The internet has dark corners

Just because a site such as the Daily Stormer no longer appears in a Google search doesn’t mean it has been squashed or that its supporters have disappeared. When popular platforms block content, the offensive site, or user, tends to move underground—and on the internet, this means relocating to the dark web. Following its expulsion from GoDaddy and Google, the Daily Stormer could no longer be reached at its regular address, but reportedly resurfaced on the dark web, accessible through Tor, software that lets users reach sites not available on the regular web. Tor lets users browse anonymously, disguising a person’s identity and online activities, by hiding the IP address of the computer they are browsing from, relaying their traffic through a series of Tor servers, and encrypting that traffic in layers so it can’t easily be traced back to that individual.
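For readers curious about the mechanics, here is a minimal sketch, in Python, of what sending traffic through Tor looks like in practice. The specifics are our own illustration, not anything from the reporting: it assumes a local Tor client is already running on its default SOCKS port, 9050, and that the requests library has been installed with SOCKS support (pip install requests[socks]).

import requests

# "socks5h" (note the trailing "h") tells requests to resolve domain names
# through Tor as well, so even the DNS lookup doesn't reveal the destination
# to the local network. Port 9050 is Tor's default SOCKS5 port; this whole
# setup is an illustrative assumption, not something from the article.
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# The request below leaves the machine encrypted in layers and is relayed
# through several Tor nodes; the destination site sees the exit node's IP
# address rather than the user's. check.torproject.org simply reports
# whether your connection arrived via Tor.
response = requests.get("https://check.torproject.org/",
                        proxies=TOR_PROXY, timeout=30)
print(response.status_code)

The library is incidental; any program pointed at that local proxy gets the same treatment described above: layered encryption and relay-hopping that hide where the traffic originally came from.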

The importance of keeping the net neutral

While many people would agree that companies like Twitter and GoDaddy should shut down accounts that spread hate or discriminatory rhetoric, doing so busts the myth that these tech giants have been desperately clinging to: that technology is inherently neutral. The ideal of net neutrality is that users should be able to access all content and applications regardless of where they come from or what they say. While the concept is primarily tied to ensuring that internet service providers can’t give preferential treatment to specific sites or services, it underscores a basic tenet of the internet: that information should be free, and users should have open, fair access to content.

While some might say these actions are too little, too late—after all, the Daily Stormer’s Heather Heyer article was hardly its first offensive post—and others worry about what this new hands-on approach might mean for free speech, one thing is certain: companies can no longer get away with proclaiming that their technologies are “just platforms” or “just tools.” Because even on the internet, with great power comes great responsibility.
