A robust public debate is currently underway about the responsibility of online platforms. We have long called for this discussion, but only recently has it been seriously taken up by legislators and the public. The debate begins with a basic question: should platforms be responsible for user-generated content? If so, under what circumstances? What exactly would such responsibility look like? At the center of the debate is Section 230 of the Communications Decency Act—a provision originally designed to encourage tech companies to clean up “offensive” online content. The public discourse around Section 230, however, is riddled with misconceptions. As an initial matter, many people who opine about the law are unfamiliar with its history, text, and application. This lack of knowledge impairs thoughtful evaluation of the law’s goals and how well they have been achieved. Accordingly, Part I of this Article sets the stage with a description of Section 230—its legislative history and purpose, its interpretation in the courts, and the problems that current judicial interpretation raises. A second, and related, major source of misunderstanding is the conflation of Section 230 and the First Amendment. Part II details how this conflation distorts the discussion in three ways: it assumes that all Internet activity is protected speech; it treats private actors as though they were government actors; and it presumes that regulation will inevitably result in less speech. These distortions must be addressed in order to pave the way for clear-eyed policy reform. Part III offers potential solutions to help Section 230 achieve its legitimate goals.

Citation
Danielle Citron & Mary Anne Franks, The Internet as a Speech Conversion Machine and Other Myths Confounding Section 230 Reform Efforts, 2020 University of Chicago Legal Forum 45.