The Section 230 Conundrum

Callum Maddox explains why it’s time we started seeing social media platforms for what they are.

With more people accessing news via social media than via newspapers, it’s time for social media platforms to play by the rules. 

Just last week, WhatsApp, the messaging service owned by Facebook, made headlines after users spread a viral disinformation campaign claiming that ibuprofen worsened the symptoms of COVID-19. Days later, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter and YouTube released a joint statement announcing plans to “jointly combat… fraud and misinformation about the virus”. Facebook announced they would ban listings selling face masks and, alongside Twitter, verified more than 800 NHS accounts to promote official advice. Similarly, Google announced they would adjust their algorithm to make NHS websites and official information easier to find.

What’s shocking, though, is that these platforms are currently under no legal obligation to do any of this in most jurisdictions around the world. This is largely due to the weakness of US law on social media regulation and the financial muscle of these tech firms. In Australia, we have only two real restrictions: tech companies face fines if they do not remove “abhorrent violent material”, and the eSafety Commissioner can force tech companies to remove harassing or abusive material and fine them if they do not comply within 48 hours. In 2018 the Commissioner’s remit was extended to cover revenge porn. And yet these are some of the toughest laws on social media in the world. Many jurisdictions are hesitant to implement tougher laws for fear these tech giants will move their infrastructure overseas, costing domestic jobs and damaging the domestic economy. As such, much of the world is bound to the laws of the US, where the majority of these companies are headquartered.

Under US law, social media platforms are treated as providers of “interactive computer services” rather than as “publishers”, and therefore can’t be held liable for anything published by users of the platform. In 1995, Stratton Oakmont, of Wolf of Wall Street fame, sued Prodigy, the provider of an online bulletin board, after a user posted defamatory messages about the firm. Stratton Oakmont won the case by proving that Prodigy exercised editorial control over the bulletin board by way of moderation; Prodigy was therefore liable for the speech of its users in the same way a newspaper publisher is responsible for the speech of its journalists.

Enter Section 230. In an attempt to preserve the First Amendment right to free speech, Section 230 of the 1996 Communications Decency Act (CDA) prevents providers of “interactive computer services” from being sued for hosting information provided by their users. This piece of legislation means that if Stratton Oakmont brought the same case before the courts today, they’d effectively have no case.

But the implications don’t stop there. Social media platforms such as Facebook and Twitter can’t be held responsible for any harmful activity their platforms enable. A stark example of this occurred when Matthew Herrick’s ex-boyfriend used the online dating app Grindr to impersonate Herrick and send unsuspecting men to his house. Under Section 230, Grindr held legal immunity in the civil case Herrick brought against them.

Lawmakers in the US have already seen a need to address this. In 2018, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) was signed into law, meaning Section 230 no longer applies to civil and criminal charges of sex trafficking. Defenders of free speech, however, have spoken out against this decision. The Electronic Frontier Foundation, a digital rights advocacy group, called FOSTA “the most significant rollback to date of the protections for online speech”.

Despite policies against it, Twitter has been reluctant to ban Nazis from the platform in the interest of “political neutrality”. This has led former US Vice President and current Democratic presidential candidate Joe Biden to call for Section 230 to be repealed altogether and for platforms to be held accountable, suggesting the government should force them to play a more active role in censoring content.

This has been met with fears that social media platforms will use this opportunity to censor or preference content to serve political and financial ends. Donald Trump has previously attacked social media platforms for “systemic bias against conservatives”, and CNN obtained a copy of a draft executive order from the White House calling for the Federal Communications Commission to examine how the law protects platforms that moderate content on their websites. In effect, this could lead to stricter policies and the removal of Section 230 immunity for social media platforms.

One Republican senator has proposed that an independent organisation should determine whether a large social media platform is politically neutral before allowing it legal immunity under Section 230. The Internet Association, whose members include Facebook and Google, have said that Section 230 allows them to remove harmful content even when that content is politically motivated. They argue the proposed legislation would discourage platforms from removing harmful content for fear of losing their politically neutral status.

The proposal includes mandatory reporting of censorship practices and is a step in the right direction. It’s the first proposed legislation to recognise the influence social media platforms wield in censoring or preferencing content to serve their own agendas. However, it does little to encourage them to prevent harmful content from appearing on their sites.

Facebook CEO Mark Zuckerberg has perceptively proposed that governments regulate social media platforms as something in between a telecommunications company and a newspaper. This would protect freedom of speech while preventing the harm caused by the proliferation of fake news and the illegal activity the platforms facilitate. While FOSTA has had dangerous consequences for sex workers, this is primarily due to the ambiguity of the bill and attempts by conservatives to restrict sex work through a bill aimed at reducing sex trafficking. Governments need to apply more FOSTA-like exceptions to Section 230, including laws that require platforms to minimise the dissemination of misinformation.

The key appears to lie in reporting. The German government’s NetzDG law, in force since 2018, forces tech giants to establish processes for reviewing content-related complaints from users, to remove illegal content, and to report on their handling of complaints twice a year.

Tech giants don’t want to under-censor for fear that the government will decide self-regulation doesn’t work and strip them of their Section 230 immunity. But at the same time, they don’t want to be seen to over-censor their platforms in case lawmakers redefine them as publishers with editorial control, which could also cost them their Section 230 immunity. To strike this middle ground, tech giants should be forced to report on their censorship activity to communications regulators. They need to demonstrate that they are taking significant steps to remove harmful content and fake news, while also showing that they remain politically neutral in their provision of what is essentially a virtual public sphere.

What exactly constitutes harmful content, and what counts as political neutrality, is a discussion for another day. In the meantime, it’s time we started seeing social media platforms for what they are: platforms that cannot be allowed the influence of newspapers without the responsibility.