
What happens if the internet’s most important law disappears?

Wave goodbye to the comment section of the internet.


Buried deep in the 1996 Telecommunications Act is a tiny clause that underpins everything we do online. It's often described as the 26 words that created the internet -- and with very good reason. Every email you send, every social media post you make and every review you submit is, after a fashion, made under this law's protection. And now, it's under threat.

Both Republicans and Democrats are suggesting that the protections this clause offers are too broad to be sustainable. The current administration already weakened it, carving out exceptions for adult content under the auspices of FOSTA/SESTA. Republican Senator Ted Cruz has either misspoken or misrepresented the law to encourage its removal. And senior representatives have refused to testify in support of the law when asked to do so by key committees.

Three front-runners for the Democratic Party nomination are all targeting the law, too. Former Vice President Joe Biden told The New York Times that, if elected, he would see the law "revoked, immediately." Senator Bernie Sanders has pledged to reform the law, while Senator Elizabeth Warren is pushing for wider reforms of the technology industry altogether.

Communications Decency Act of 1996, 47 U.S.C. § 230

(c) Protection for ''Good Samaritan'' blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

The law is Section 230 (or s230) of the Communications Decency Act of 1996, which insulates platform holders from legal reprisals based on the things we say and do online. Think of it as a near-universal get-out-of-jail-free card for websites that host content that may be defamatory or obscene.

Dr. Corynne McSherry, legal director at the Electronic Frontier Foundation, did testify before the House Energy and Commerce Committee last year to defend s230. She explained: "If you have ever forwarded an email -- whether a news article, a party invitation or birth announcement -- you have done so with the protection of Section 230."

Section 230 is based on legal principles that date back to an obscenity case from the '50s, in which a California bookstore owner was sued for the content of a book they sold. The Supreme Court found that it would be impossible for the owner to have read every title in their store. So while there would be an issue if they knew about the obscene material, it would be very difficult to prove that they did, and to hold them to account for it.

Two legal cases in the early '90s muddied the situation, prompting two senators to sponsor a law to clarify the role of web platforms. Section 230 was the outcome and essentially applied the bookstore rule, even if that wasn't the original intention of its creators. (They had hoped to encourage proactive moderation but allow protections should they miss something.)

If Section 230 is killed without proper thought to what comes next, then big chunks of the internet will become unusable. Dr. McSherry, in testimony, said that platforms like Facebook would have to use "extreme caution in their moderation" to limit their own liability. That would mean censoring everything and anything that could prompt a legal challenge or shutting down comment threads entirely.

Professor Jeff Kosseff is the author of the book The Twenty-Six Words That Created the Internet and an expert on s230. He believes that killing it off would provoke a flurry of cases against every major site, saying that "Facebook will be sued a lot." And this case law will likely decide the ultimate fate of the internet in the absence of statute. "The problem is you don't have many [legal] cases," he told Engadget, "because Section 230 is such a strong defense."

"There are other platforms than Facebook," said Kosseff, "and Facebook probably won't be harmed as much by [the] repeal." Smaller sites, which "don't have the ability to absorb the litigation costs like Facebook does," and lack the money to implement comprehensive moderation, will be in serious jeopardy. McSherry said that any repeal would force sites to take a heavy-handed approach, removing "far more speech" than necessary.

And we've already seen glimpses of this with the fallout from FOSTA/SESTA, which forced platforms to mass-censor adult content. Because of the legal risk inherent with hosting the material, many sites issued blanket bans, like Tumblr, which saw its user numbers (and value) plummet in the process. YouTube demonetized and suppressed educational material for LGBTQ teens. Even Instagram was found to have blocked a feminist newsletter from advertising because it intimated the publication was pushing an escort service.

It's been suggested that withdrawing s230 will be less problematic now because it's possible to automate much of the content moderation. The tale of Facebook's very human moderators put paid to that idea and, for now at least, automation clearly isn't going to work for many cases.

AI expert Dr. Kate Devlin of King's College London says that "AI carries biases, lacks nuance and is very bad at determining context." She added that "we're already seeing the effect of blanket decisions, like Facebook banning nipples, but also breastfeeding pics."

The upshot is that any website that relies on user-generated content, from YouTube through to Goodreads, is in trouble. "Say Yelp gets a complaint from a restaurant that got a one-star review," said Kosseff, "and says that it's inaccurate. With Section 230, Yelp can do whatever it wants with that," but without it, "Yelp is in a lot of trouble if it keeps it up." In that situation, it has two choices: fight the onslaught of legal cases from bad reviews or take the content down. The end result is simple: "Yelp starts losing all of its negative reviews, and Yelp isn't incredibly valuable if all it has is five-star reviews."

And when those cases came to court, the future of the internet would be left in the hands of potentially partisan judges. "There's no way to know with certainty how courts would interpret [the law]," he said, adding that "a lot of it would depend on which judges got to the cases first." In many regards, luck is a key factor: "One of the reasons Section 230 has been such a strong defense is that the first federal appellate court judge [...] was a strong free-speech advocate who used to be a newspaper editor."

Given the current political climate and the partisan nature of both politics and law in the US, we can't assume that judges would be ready to defend the status quo. And while the current system has numerous flaws and allows bad actors to flourish, the alternative could well be much worse.