The Unknown Future of Section 230

By Hannah Gillihan

When the internet was created in 1983, the goal was a neutral, decentralized network that would let any connected device interact with any other freely. As the internet grew over the next decade, that goal was jeopardized by the rise of private internet services, a user base that swelled past 300 million, and thorny questions of liability for speech on this new, unfamiliar and unregulated platform.

On Thursday, Feb. 25, the McCandlish Phillips Journalism Institute, in partnership with The King’s College and the Acton Institute, held a virtual webinar called SkepTech 2021 featuring journalist David French and panelists Dr. Mary Anne Franks of the University of Miami, Scott Lincicome of the Cato Institute, and Al Sikes, former chairman of the FCC, who discussed Section 230 and free speech on the internet.

Up until 1996, internet services and online forums that moderated their content could be held responsible, as publishers, for virtually everything posted on their sites, even by users themselves. These companies essentially had to read every single thing published on their platforms, or they risked liability for all of it. But with hundreds of millions of users across various platforms, that was impossible, so these services would often simply shut down any user-publishing features of their sites. It was an all-or-nothing game.

In 1996, Congress passed Section 230 as part of the Communications Decency Act to address this problem of online liability and moderation. The law states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In essence, no internet service, from social media to online retailers, can be held liable for content its users publish, even if that content is illegal. The law has recently become controversial in connection with the insurrection at the Capitol on January 6th and President Trump’s use of Twitter.

David French argued at the SkepTech 2021 webinar that because the insurrection was organized on social media, many people are pushing for Twitter, Facebook, Reddit and other platforms to proactively moderate and remove anything that might be considered hate speech or that could cause harm like that event did; some go as far as to blame the sites for enabling the behavior. President Trump was eventually banned from most social media platforms after many demanded the sites deplatform him for spreading misinformation and baseless accusations, and, some would say, for inciting the violence. What many now push for is either the revision or the outright repeal of Section 230.

But French says that is extremely difficult, because the emergence of ‘Big Tech’ brought with it the emergence of “ideological monocultures,” in which people base their objections and opinions purely on personal, subjective ideology. That makes compromise incredibly difficult.

The right’s biggest issue with Section 230 is free speech: conservatives want far less moderation and censorship than already exists, to ensure their First Amendment rights are upheld. The left, at the other end of the spectrum, wants far more moderation, primarily to stop the spread of hate, harm and misinformation that, they argue, often comes from the right. Both sets of grievances with social media are breeding hate and resentment, so how do we fix it?

Scott Lincicome argues that, as of now, there is no reform to Section 230 that would satisfy both sides without upending the internet as we know it. Put too much censorship power in the hands of the social media sites, and the result could be limited, echo-chamber-like speech in which platforms publish only what they agree with. Too little censorship, and the internet fills with child pornography, severe misinformation, harassment and worse; yet defining what counts as “harm” is nearly impossible in an ever-changing culture like ours.

Dr. Mary Anne Franks, however, believes it is the law’s job to define terms such as “harm” so that we may better understand what is and is not allowed, just as courts have done for the First Amendment. Government is not one size fits all, and subjective standards are extremely difficult to litigate because there is so much to prove, so defining these terms, and revising their meanings, is imperative to fixing the problem Section 230 now presents.

Sites like Twitter and Instagram are working to combat many of these issues by banning Trump’s accounts, placing warning banners on tweets that may contain misinformation, and actively monitoring for hate speech, though some wish the companies were more proactive and caught these problems before they happen. Section 230 protects them from being held responsible for their users’ content, even when they see harmful material and allow it to remain. Should social media sites face more regulation on these fronts?

In essence, the choice between revising Section 230 and removing it entirely is a tricky one. Repealing it would open the floodgates to a myriad of problems on social media: hate speech, misinformation, bullying, organized violence and much more. Yet revising it may be just as difficult, because so many terms need defining and each side’s objections are subjective; the result could be an un-American, overly censored internet. But the fight is not over, and compromise is still possible. Only time will tell what the future holds for the internet and Section 230 as we know it.

Hannah Gillihan is a Journalism major at The King's College in New York City.