Professor Danielle K. Citron has spent more than a decade working to stop internet companies from profiting from destructive activity such as so-called “revenge porn” and cyberstalking.

The University of Virginia School of Law professor is now advising lawmakers on how to reform Section 230 of the Communications Decency Act of 1996, which has been used as a shield for internet companies that might otherwise face legal liability for user content.

Section 230 was designed to give internet companies leeway to regulate themselves and take down offensive content, but it has also shielded them from civil liability for content published by users. For example, Facebook can either allow (and theoretically profit from) posts purveying false or offensive information, or it can block them. On the extreme end, revenge porn websites face no real repercussions for profiting from abusive content.

Citron recently contributed to a bill proposed by Sens. Mark Warner, Mazie Hirono and Amy Klobuchar, called the SAFE TECH Act. Among the bill’s provisions, online platforms would no longer be able to claim Section 230 immunity for alleged violations of federal or state civil rights laws, antitrust laws, cyberstalking laws or human rights laws, or in civil actions regarding a wrongful death.

Citron, the Jefferson Scholars Foundation Schenck Distinguished Professor in Law at UVA, writes and teaches about privacy, free expression and civil rights. In 2019, she was named a MacArthur Fellow for her work on cyberstalking and intimate privacy. She answered our questions about Section 230 and the future of speech on the internet.

What does Section 230 get right or wrong?

Section 230’s goal — a wise one — is to incentivize online service providers to moderate online activity themselves. They are in the best position to deal with illegality and to minimize harm to individuals, firms and society at large. When it was passed in 1996, the law’s drafters, then-Congressman Ron Wyden and then-Congressman Chris Cox, knew that federal agencies could not keep up with all of the illegality online and certainly could not address protected but “offensive” (their words, not mine) speech. So they gave providers a legal shield if they engaged in “private blocking and filtering of offensive material.” So far, so good.

The statute essentially said that interactive computer services could over- or under-filter information provided by users and still enjoy that immunity. Platforms would not be treated as “publishers” or “speakers” of someone else’s material, and they would not be responsible for blocking or filtering someone else’s material so long as they did it in good faith. The legal shield has been valuable — it enables social media and other content platforms to take down, block or filter all sorts of troubling content, including spam, impersonation, nonconsensual nudity, threats and so on, so long as they do it in good faith. It also lets them try to combat troubling material but do so incompletely — that is the protection for under-filtering or under-blocking information provided by someone else.

The problem, by my lights, is how the legal shield operates as to under-filtering and under-blocking — when platforms do too little moderating, or worse. What Section 230 gets wrong is the provision that applies when providers fail to address illegality or, worse, encourage it. Right now, the provision dealing with under-filtering is not conditioned on anything at all; it is a free pass, so sites can encourage illegality, make money off it and still enjoy the immunity. That is why so-called revenge porn sites are thriving, earning ad revenue from the likes, clicks and shares of eager and growing audiences. They can earn ad income from visitors and monetize their personal data, yet bear no responsibility for the harm they have encouraged and solicited, like nonconsensual porn or deepfake sex videos. [Deepfakes are manipulated video or other digital content that falsely depicts an individual in a realistic-looking or realistic-sounding way.]

How did you get involved in efforts to reform Section 230?

I’ve been working with staff and members on both sides of the aisle in the House and Senate for a number of years now. I first called for Section 230 reform in 2008 — gosh, that was not popular, to say the least. Many said that my call for reform would break the internet. Then, as now, I argued that platforms should have a duty of care, that they should earn the immunity. In my 2014 book, I stepped back a bit and argued more narrowly that sites that solicit or primarily facilitate cyberstalking or nonconsensual porn should not enjoy the immunity from liability. I returned to my broader approach in a Fordham Law Review article published in 2017, in which [Brookings Institution Senior Fellow and journalist] Benjamin Wittes and I argued (and provided specific statutory language) that the under-filtering provision (230(c)(1)) should be conditioned on reasonable content moderation practices in the face of clear illegality causing serious harm.

After the 2016 election, it became clear that social media platforms were not doing a good enough job dealing with destructive disinformation and hate speech (much of it legally protected speech, but still bad for democracy and society). Congressional staff started to reach out to talk about my proposals. That talk heated up with the advent of deepfakes. In 2019, I testified before the House Intelligence Committee about concerns related to deepfakes and Section 230, and then before the House Commerce Committee about Section 230 generally. It was really then that I began working in earnest with staffers on the problem of under-filtering and how we could incentivize companies to do that work without losing what has been important about Section 230 — incentivizing content moderation — and the activity and speech the legal shield has allowed to flourish.

In particular, I started working with Senator Warner’s office on behalf of the Cyber Civil Rights Initiative, where I am the vice president, and his amazing tech policy staffer Rafi Martina, who is a graduate of UVA Law. [Martina ’10 is Warner’s lead adviser on technology and cybersecurity policy.] I have also been working with other senators and House members on proposals with distinct goals, including exempting civil rights laws from Section 230 immunity, and with broader goals like my reasonableness proposal with Wittes.

Both political parties are involved in reform efforts. Why has this issue drawn so many together now?

Conservatives are unhappy with Section 230 because they think it has led to the removal of too much speech — their own — but the facts don’t support that complaint; at least, empirical research does not suggest that conservative voices are being unfairly removed or filtered from platforms. Liberals are unhappy with Section 230 because they think that there is too little filtering going on of troubling activity like extremist speech, hate speech and bullying. In my view, the liberals have the issue right (under-filtering), but my concerns focus on harmful activity that is illegal and that thrives online thanks to the fact that Section 230 does not condition the immunity on anything in the case of under-filtering. (It does say that over-filtering must be done in good faith.)

What are one or two examples where you think websites should have been liable?

There are a number of cases where sites have been immunized from liability even though their business model is illegality. Revenge porn and deepfake sex video sites solicit invasions of intimate privacy, yet they enjoy the immunity. They should not.

What would critics say is the downside of limiting immunity, and how would you respond?

Critics say that my proposal would lead to over-caution, that reasonableness is too vague, and that the change would generate costs borne by platforms. But the status quo already imposes costs on speech that critics often don’t acknowledge — women and minorities who face harassment and invasions of intimate privacy online are silenced, and they withdraw from online life. So dealing with the problem of sites that lack reasonable content moderation practices would mean that victims might stay online and that their lives would not be ruined. If we are going to talk about costs to expression, we need to include the voices silenced by online abuse in the calculus.

Reasonableness is not too vague. As in so many other areas of the law, reasonableness is how we work out standards of care. There is an entire industry — trust and safety — with its own professional organization. I have worked with trust and safety folks at companies for the past 10 years. Courts can figure out what is reasonable in the face of particular kinds of illegality that cause harm, just as they do in other areas of the law, from tort and data security to criminal procedure.

It’s true that smaller platforms may not have the resources that the dominant platforms do, and that they would have a harder time internalizing the costs of showing courts that they engaged in reasonable content moderation practices in the face of clear illegality. Reasonableness would take into account the platform’s business and audience, but the question remains whether a small platform would even bother litigating it. Lawmakers could apply the condition only to sites with a certain level of capitalization or number of subscribers, but that leaves us with a problem we can’t ignore: new entrants can solicit illegality, or allow it with a wink and a nod, and cause grave harm. We need to consider that harm and address it.

This seems to be part of a broader rethinking of the contours of the First Amendment in the United States in light of digital media. Will we always be playing catch-up to technology?

Yes, the scale, virality and reach of social media tools and platforms change the stakes of speech in ways that challenge our thinking about free speech commitments. As [author and researcher] Renée DiResta often says, there is freedom of speech but not freedom of reach (which is what social media supplies and makes money from). We may have to rethink how we assess medical disinformation and other harm-causing disinformation that does not fit neatly into statutory or doctrinal boxes. What is amazing is being colleagues with scholars like Leslie Kendrick and Fred Schauer, who have pressed us to think hard about free speech expansionism and free speech’s magnetism. I am excited to work with them on these issues and others in this tech policy space.

Founded in 1819, the University of Virginia School of Law is the second-oldest continuously operating law school in the nation. Consistently ranked among the top law schools, Virginia is a world-renowned training ground for distinguished lawyers and public servants, instilling in them a commitment to leadership, integrity and community service.