Georgetown University’s Newspaper of Record since 1920

The Hoya

MARCHL: Democratize Social Media Regulation


Following the violent riots at the U.S. Capitol on Jan. 6, many technology companies banned a number of extremist and conspiratorial accounts. Twitter banned over 70,000 QAnon accounts; Amazon Web Services booted the conservative platform Parler; and, most famously, Twitter kicked out the most notorious tweeter in U.S. politics, former President Donald Trump.

The decision to deplatform so many notable accounts received mixed reviews from the American people. While many recognized that extremists and spreaders of disinformation should not have the power to normalize their nefarious activities, others raised concerns about free speech and the power of Big Tech. The difficulty of this conversation lies at the nexus of security, free speech and the growing power of Big Tech companies in the United States.

Ultimately, deplatforming is effective in reducing disinformation and radicalization on the mainstream internet, but society must have a broader conversation about the problems that arise when only a handful of giant tech companies hold this much power over political discourse.

Following the mass bans of January 2021, the issue of deplatforming was widely debated by citizens and their representatives, but the practice itself is by no means unprecedented. Starting in 2016, platforms like Facebook and Twitter began banning controversial right-wing extremists such as Milo Yiannopoulos and Laura Loomer.

Richard Rogers, a professor at the University of Amsterdam, found that when influencers from the “alt-right,” a white nationalist movement, lose their platforms, their audiences shrink, their rhetoric calms down and their revenue streams dry up, effectively halting their business model of profiteering off of hate and radicalization. In other words, deplatforming is effective in limiting the spread of extremism online.

In addition to banning influencers, social media companies also deplatform terrorists, as these platforms offer space for extremist organizations to spread their propaganda and recruitment messages to unsuspecting users. Reddit recognized that alt-right leaders planned much of the deadly 2017 march in Charlottesville, Va., on its website and therefore shut down many of the offending subreddits. 

The Islamic State group, for its part, is adept at hijacking trends on Twitter in order to expose millions of people worldwide to its recruitment messages. A study conducted by the Brookings Institution showed that when Twitter began suspending IS accounts en masse, its propaganda no longer trended and its recruitment slowed, seriously hampering the growth of its network. It is too early to tell whether January’s mass banning of far-right accounts will show the same effects, but these conclusions are encouraging.

Despite a mostly successful record, deplatforming has practical downsides. The practice aims to scrub a specific platform of hateful or dangerous content, not to disengage people from extremist beliefs. As a result, users holding these views eventually migrate to platforms with little content moderation, such as Telegram, MeWe and 8kun, which are populated with like-minded users. This migration also makes extremist networks harder to track, as researchers must start from scratch. Platform migration thus facilitates faster radicalization that goes largely unmonitored, with potentially dangerous effects.

More importantly, deplatforming engenders a serious conversation about expression and democracy in the United States. The debate surrounding deplatforming often invokes concerns about free speech. This rhetoric misunderstands the First Amendment, which protects Americans from government encroachment on their right to speech; the amendment says nothing about private companies. If anything, tech companies have a First Amendment right to set editorial policies as they see fit.

The real issue for free expression lies within the reality that just a handful of poorly regulated companies — led by billionaires unrepresentative of and unaccountable to the populations they serve — bear so much responsibility in deciding what is and is not acceptable in the marketplace of ideas. Though Trump’s hate and incitement certainly warranted suspension, Twitter and Facebook set a monumental and exploitable precedent when they unilaterally and effectively removed a democratically elected leader from their platforms.

If we, as citizens, students and social media users, are serious about handling the problem of extremism in the United States, then we must be willing to discuss the fact that an unelected, concentrated force of tech executives currently has as much sway as a democratically elected administration. 

First and foremost, tech companies must be far more transparent and willing to communicate how they collect data to deliver personalized content, because those same techniques are being used to deliver targeted propaganda. Additionally, they must clearly communicate how they exercise content moderation.

Facebook has created the Oversight Board, a supreme court of sorts composed of a Nobel laureate and a former prime minister, among others, which decides what can and cannot be said on the platform. The intention is sound, as it takes sole responsibility for these decisions away from the CEO. But the board, no matter how wise its members, is neither elected by nor representative of Facebook’s almost 3 billion users. A private company is still making decisions that affect people who are not even users of its service, and its decision-making process is less than transparent.

Ultimately, the issue of extremism in Big Tech will only be solved when these companies actually commit to transparency and are willing to engage in fact-based conversations free of special interests. These tech companies will not adequately regulate themselves, so it is on us to educate ourselves and advocate for clear and precise guidelines that will make Big Tech respect democratic norms and imperatives.

Lea Marchl is a junior in the School of Foreign Service. Combatting Conspiracies appears online every other Friday.

Comments (1)

    BC | Mar 22, 2021 at 8:37 am

    “The real issue for free expression lies within the reality that just a handful of poorly regulated companies — led by billionaires unrepresentative of and unaccountable to the populations they serve — bear so much responsibility in deciding what is and is not acceptable in the marketplace of ideas.”

    Excellent article, Lea. It echoes Macron’s comments on the Trump Twitter ban: despite the politics, an undemocratic entity should not have such power to decide who gets deplatformed and who doesn’t. The First Amendment can’t hold them legally liable for doing so, but that doesn’t mean they’re protected from criticism. And maybe, in the future, that criticism will materialize into proper regulation.
