Section 230, Thirty Years On
In the 1990s, there was a shady securities brokerage called Stratton Oakmont, perhaps best remembered for inspiring Scorsese’s The Wolf of Wall Street (2013). In late 1994, someone made an anonymous post on a Prodigy bulletin board, then novel technology, alleging malfeasance on the part of Stratton Oakmont and its principals (if you’ve seen the film, you know allegations of that genre were broadly true of the firm, though I’m not familiar with precisely what was alleged in this instance).
Stratton Oakmont sued for defamation. Lacking the anonymous poster’s identity, they sued Prodigy as the publisher of the post instead. An earlier precedent, Cubby v. CompuServe, had held CompuServe not liable as a publisher of bulletin board posts, classing it as a mere distributor; Stratton Oakmont distinguished its facts on the basis that CompuServe’s bulletin board had been unmoderated, while Prodigy’s had content guidelines for users enforced by moderation. This, they argued, was an exercise of editorial control, making Prodigy a publisher subject to liability.
The New York court agreed, holding Prodigy liable as a publisher in 1995, and Congress took note. To overturn the Stratton Oakmont precedent, Congress included provisions in the Communications Decency Act of 1996 enshrining a CompuServe-like regime that applies even if a provider engages in moderation as Prodigy had. While the Supreme Court later struck down the “decency” portions of the CDA in Reno v. ACLU on First Amendment grounds, these provisions, now 47 USC §230 (“Section 230”), remained in place.
Thirty years later, section 230 is an occasional political flashpoint. Platforms where members of the public can post things online are no longer novel technology, but are mature and commonplace. Courts have interpreted section 230’s provisions expansively in cases such as Zeran v. AOL, which is not entirely unreasonable of them; the statute is broad. The operative language follows:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
The defined terms it references are “interactive computer service” and “information content provider”:
(2) Interactive computer service
The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
(3) Information content provider
The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.
(No one ever said Congress was good at coining pithy terms.)
It seems pretty clear that platforms easily meet the definition of “interactive computer service”, though the definition was also clearly drafted in the 1990s with systems like CompuServe, GEnie, or Prodigy in mind; the split between the generalist “Internet service provider” and the specialized discussion platform host hadn’t yet fully occurred. This means, under paragraph (c)(1)’s broad language, platforms enjoy immunity from publisher liability with respect to “information provided by another information content provider”, which includes at least the unedited text of user posts, as in Cubby v. CompuServe.
Under (c)(2), platforms enjoy additional freedom from liability in cases where they do engage in moderation, as Prodigy did. The nebulous language in (c)(2), such as “good faith” and “otherwise objectionable”, invites a broad interpretation from courts, and courts have so interpreted it, though it wasn’t inevitable that they would. There is a legal principle called ejusdem generis, applying to lists of the general form “X, Y, Z, or other things”, like (c)(2)(A)’s “obscene, lewd, lascivious, […] or otherwise objectionable”, which holds that the catchall at the end must be interpreted in light of the specific items preceding it. So one is on much firmer ground moderating against porn, spam, and harassment than against something more arbitrarily chosen that one happens to object to. But this kind of drafting does carry the risk of very expansive interpretation by courts.
This expansiveness has led to unpleasant rulings such as 2009’s Barnes v. Yahoo, where Yahoo promised to remove harassing content, failed to do so, and was held immune under section 230 for that failure as a matter of tort law (only a separate promissory estoppel claim, premised on the promise itself, was allowed to proceed). That is quite the reversal if one reads the statute as attempting to carve out space for the removal of such content. But this isn’t the farthest courts have taken section 230 immunity.
In today’s environment, platforms employ practices quite different from the simple content guidelines and moderation Prodigy had in the 90s. Most controversial is what’s often called “the algorithm”: the often opaque methodology a platform uses to decide which content to surface to which users. If a simple chronological view is available at all, it generally isn’t the default.
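For readers who think in code, a deliberately toy sketch of the distinction may help. The scoring function and its weights below are invented purely for illustration; real ranking systems are far more elaborate, and their actual signals are not public.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int = 0
    shares: int = 0

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Pure distribution: every reader sees the same ordering,
    # determined entirely by when each post was made.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def ranked_feed(posts: list[Post], now: datetime) -> list[Post]:
    # Editorial selection: the platform scores each post by its own
    # criteria and decides what to surface first.
    def score(p: Post) -> float:
        age_hours = (now - p.posted_at).total_seconds() / 3600
        engagement = p.likes + 3 * p.shares  # hypothetical weights
        return engagement / (1 + age_hours)  # downrank older posts
    return sorted(posts, key=score, reverse=True)
```

The first function merely relays content in time order; the second embodies the platform’s own judgment about what a reader should see, and that judgment is where the legal questions begin.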
Courts have ruled that the use of such algorithmic feeds does not invite liability, e.g. in 2019’s Force v. Facebook. Some judges, such as Chief Judge Katzmann in his Force dissent and Justice Thomas in his statement respecting denial of certiorari in Malwarebytes v. Enigma, have been troubled by what they view as the protection of companies that are arguably themselves the “information content provider”: the individual postings may come from users, but the editorialization, e.g. ordering them in a feed for the platform’s own reasons, comes from the platform.
In Malwarebytes, Malwarebytes had engaged in such egregious conduct (the “otherwise objectionable” material it was restricting access to was the software of a direct competitor!) that the Ninth Circuit finally found a section 230 argument it wasn’t willing to countenance; Justice Thomas’s statement expressed worry over the breadth of immunity otherwise granted.
As online platforms have become the new public square, some have also been alleged to censor user content expressing disfavored viewpoints. This is an exercise of editorial control categorically different from enforcing viewpoint-neutral content guidelines to, e.g., remove porn or spam, but it nonetheless fits inside expansive interpretations of section 230’s language, as in Domen v. Vimeo, which held that platforms have broad discretion as to what content is “otherwise objectionable”.
This issue was most responsible for bringing section 230 into public discussion around 2020, and as with many areas of policy or law that become political flashpoints, the discourse grew quite muddled and confused. I’m particularly incensed by Mike Masnick’s piece on Techdirt entitled Hello! You’ve Been Referred Here Because You’re Wrong About Section 230 Of The Communications Decency Act (the title is a good preview of the article’s smug tone, which never lets up), though it’s not the only offender. In some ways, it’s my inspiration for writing this piece; I believe my readers can learn about and understand section 230 jurisprudence and the underlying issues without being talked down to by a polemicist who trades on confusing “is” with “ought”.
There are a few layers to this: what the statute says (it’s short, you can read it), what the case law says (briefly summarized above), and what public policy should be. It’s also legitimate to dispute the courts’ interpretation of the law; your opinion may not become law if you’re not an appellate judge, but you may believe that “otherwise objectionable” should be interpreted ejusdem generis in the context of “obscene, lewd, lascivious, […]”.
It’s also legitimate to question whether public policy should carve out this sort of immunity at all. I think it should, but the expansive interpretation leaves me uneasy (and I’m in the company of some people who are appellate judges). I don’t think public policy should confer a special immunity from liability simply because a platform is electronic, or because it handles primarily user posts. But Stratton Oakmont shouldn’t have been able to prevail against Prodigy (even if it had been a perfectly upstanding firm). Section 230 serves a useful purpose in protecting neutral online public squares.
The difficult open question is where and how to draw a line between the types of conduct which should be granted a liability shield (e.g. deleting porn, spam, and harassment), and those which should not (which may include algorithmic feeds or viewpoint discrimination). Especially in the Loper Bright era, the law needs clear drafting with easily understandable bright lines and safe harbors. I’ll follow up with more thoughts on the matter, but I’m confident 1996’s first attempt can be improved on.
The Blackboard is a publication about policy and technology, not particularly weighted towards either. One can think of engineering as where theory is turned into practice, and policy as where practice is turned into results. In this sense, I’m an engineer writing about engineering and policy.
If this seems interesting, or if you want to read about topics like
what computer operating systems need to look like in a networked world,
how a truly American intellectual property system could be constructed,
a take on monetary systems you almost certainly haven’t heard before,
and more, then feel free to subscribe below. (You don’t need to pay; I do not anticipate paywalling articles, only the comment section. But if you do, you’ll push me towards writing for the public benefit.)