The internet isn’t fair – and Section 230 can help
It seems like every time an online service like Facebook or Twitter publishes something unpopular, there is talk of repealing "Section 230." And because big internet services are generally unpopular, it's tempting to support anything that would take them down a notch. But what does Section 230 actually say, and would changing it help or hurt the online public?
To answer the first question: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
That's it: just 26 words. Their accepted meaning is that services like Facebook and Twitter (and, significantly, lots of smaller services) cannot be held liable for content on their platforms that was written by someone else. While this may sound like a license to slander, that's not the actual effect. To understand why, consider this pre-Section 230 example from the article "Why Section 230 exists and how people are still getting it wrong":
Then we get to these early internet services like CompuServe and Prodigy in the early ‘90s. CompuServe is like the Wild West. It basically says, “We’re not going to moderate anything.” Prodigy says, “We’re going to have moderators, and we’re going to prohibit bad stuff from being online.” They’re both, not surprisingly, sued for defamation based on third-party content.
CompuServe's lawsuit is dismissed because what the judge says is, yeah, CompuServe is the electronic equivalent of a newsstand or bookstore. The court rules that Prodigy doesn't get the same immunity because Prodigy actually did moderate content, so Prodigy is more like a newspaper's letters-to-the-editor page. So you get this really weird rule where these online platforms can reduce their liability by not moderating content.
That really is what triggered the proposal of Section 230. For Congress, the motivator for Section 230 was that it did not want platforms to be these neutral conduits, whatever that means. It wanted the platforms to moderate content.
Somewhat counterintuitively, Section 230 is the law that allows services to moderate their users without getting sued. That they usually do a poor job of it is a separate issue. In the Electronic Frontier Foundation's excellent (if long) article "Section 230 is Good, Actually," they write:
The misconception that platforms can somehow lose Section 230 protections for moderating users’ posts has gotten a lot of airtime. This is false. Section 230 allows sites to moderate content how they see fit. And that’s what we want: a variety of sites with a plethora of moderation practices keeps the online ecosystem workable for everyone. The Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more permissive.
Maintaining a level playing field
The EFF article goes on at some length about how Section 230's protections enable smaller services.
Unfortunately, trying to legislate that platforms moderate certain content more forcefully, or more “neutrally,” would create immense legal risk for any new social media platform—raising, rather than lowering, the barrier to entry for new platforms. Likewise, if Twitter and Facebook faced serious competition, then the decisions they make about how to handle (or not handle) hateful speech or disinformation wouldn’t have nearly the influence they have today on online discourse. If there were twenty major social media platforms, then the decisions that any one of them makes to host, remove, or fact-check the latest misleading post about the election results wouldn’t have the same effect on the public discourse.
Put simply: reforming Section 230 would not only fail to punish “Big Tech,” but would backfire in just about every way, leading to fewer places for people to publish speech online, and to more censorship, not less.
While it may feel at times that Big Tech companies are the only entities on the internet, that really isn't the case. Smaller publishers abound, taking advantage of the internet's amazing reach and low barriers to entry. Here is just one of many examples of these publishers being protected by Section 230:
In 2018, a spreadsheet known as the "Shitty Media Men List," initially created by Moira Donegan, gained recognition for naming individuals who were suspected of mistreating female employees. A defamation lawsuit against Donegan was brought by the writer Stephen Elliott, who was named on the list. But the Shitty Media Men List was a Google spreadsheet shared via link and made editable by anyone, which made it particularly easy for anonymous speakers to share their experiences with the men identified on it. Because Donegan initially created the spreadsheet as a platform for others to provide information, she is likely immune from suit under Section 230. The case is still pending, but we expect the court to rule that she is not liable.
Are there problems with the way speech is moderated online? You bet! The sheer scale of content pushed onto social media every day, by humans and non-humans alike, might make the job of preventing false or hateful speech impossible. But Section 230 is what keeps the responsibility where it belongs: with the operators of these platforms. The final word from the EFF:
Reforming online platforms is tough work. Repealing Section 230 may seem like the easy way out, but as mentioned above, no reform to Section 230 that we've seen would solve these problems. Rather, reforms would likely backfire – increasing censorship in some cases, and dangerously increasing liability in others.
If you feel like supporting the free flow of online information, the Wikimedia Foundation is currently fundraising (I kicked in a few bucks) and the Electronic Frontier Foundation is celebrating their 30th anniversary.
Note: this post first appeared in the weekly Webdancers Newsletter. If you’d like to see more like this in your inbox, please subscribe.