There’s at least one thing that Joe Biden and Donald Trump seem to agree on: that federal law gives unfair legal immunity to technology giants.
In an interview with The New York Times published in January, Biden argued that “we should be worried about” Facebook “being exempt” from lawsuits. The Times, he noted, “can’t write something you know to be false and be exempt from being sued.” But under a 1996 law known as Section 230, Biden claimed, Facebook can do just that.
“Section 230 should be revoked immediately,” Biden said.
Just last month, Trump very publicly expressed a similar view.
“Social media giants like Twitter receive an unprecedented liability shield based on the theory that they are a neutral platform, not an editor with a viewpoint,” he said during an Oval Office signing ceremony for an executive order designed to rein in big technology companies.
They aren’t the only politicians who feel this way, of course. Within days of the president’s executive order, after Twitter applied a “fact check” to one of Trump’s tweets, Sen. Josh Hawley (R-Mo.) raised this issue in a letter to Twitter CEO Jack Dorsey.
“Twitter’s decision to editorialize regarding the content of political speech raises questions about why Twitter should continue receiving special status and special immunity from publisher liability under Section 230 of the Communications Decency Act,” Hawley wrote.
There’s a sliver of truth to these descriptions of Section 230. The law really does give broad immunity to websites that wasn’t available to anyone before the Internet. But all three comments fundamentally misrepresent how Section 230 works.
Biden is wrong to suggest that Section 230 treats Facebook differently from The New York Times. If someone posts a defamatory comment in the comment section of a Times article, the company enjoys exactly the same legal immunity that Facebook gets for user posts. Conversely, if Facebook published a defamatory article written by an employee, it would be just as liable as the Times.
Meanwhile, Trump and Hawley are wrong to suggest that Section 230 requires online platforms to be neutral. In reality, the law was written to encourage online platforms to filter user-submitted content, not to discourage them from doing so. It has no requirement of neutrality—political or otherwise.
But while these criticisms of Section 230 miss the mark, others have raised legitimate concerns about the law's extraordinary breadth. Bad actors have used Section 230 as a shield for genuinely abhorrent behavior, and a growing number of critics are calling for the law to be curtailed.
Section 230 fixed an emerging problem in the law
To understand Section 230, you have to understand how the law worked before Congress enacted it in 1996. At the time, the market for consumer online services was dominated by three companies: Prodigy, CompuServe, and AOL. Along with access to the Internet, these companies offered proprietary services such as real-time chat and online message boards.
Prodigy distinguished itself from rivals by advertising a moderated, family-friendly experience. Employees would monitor its message boards and delete posts that didn’t meet the company’s standards. And this difference proved to have an immense—and rather perverse—legal consequence.
In 1994, an anonymous user made a series of defamatory statements about a securities firm called Stratton Oakmont, claiming on a Prodigy message board that a pending stock offering was fraudulent and its president was a “major criminal.” The company sued Prodigy for defamation in New York state court.
Prodigy argued that it shouldn’t be liable for user content. To support that view, the company pointed back to a 1991 ruling that shielded CompuServe from liability for a potentially defamatory article. The judge in that case analogized CompuServe to a bookstore. The courts had long held that a bookstore isn’t liable for the contents of a book it sells—whether under defamation law, obscenity law, or other statutes—if it isn’t aware of the book’s contents.
But in his 1995 ruling in the Prodigy case, Judge Stuart Ain refused to apply that rule to Prodigy.
“Prodigy held itself out as an online service that exercised editorial control over the content of messages posted on its computer bulletin boards, thereby expressly differentiating itself from its competition and expressly likening itself to a newspaper,” Ain wrote. Unlike bookstores, newspapers exercise editorial control and can be sued any time they print defamatory content.
The CompuServe and Prodigy decisions each made some sense in isolation. But taken together, they had a perverse result: the more effort a service made to remove objectionable content, the more likely it was to be held liable for content that slipped through the cracks. If these precedents had remained the law of the land, website owners would have had a powerful incentive not to moderate their services at all, since any attempt to filter out defamation, hate speech, pornography, or other objectionable content would have increased their legal exposure for illegal content they failed to take down.