After actress Scarlett Johansson turned down OpenAI’s request to use her voice in their latest version of ChatGPT, an artificial intelligence chatbot, the company seemingly forged ahead and used it anyway.

Dotan Oliar

When Johansson complained, OpenAI paused “Sky,” one of five possible voice personas available to users to engage with ChatGPT. The controversy is just one among a wave of new disputes over how artificial intelligence absorbs and uses human likenesses and work. In another example, a cohort of bestselling authors, including John Grisham, David Baldacci ’86 and “Game of Thrones” author George R.R. Martin, recently filed a class-action lawsuit against OpenAI for using their books to train ChatGPT — enabling others to spin out similar novels.

University of Virginia School of Law professor Dotan Oliar, an expert in intellectual property who teaches art law and copyright, examines the OpenAI controversies and how artificial intelligence is raising new legal questions and reviving old debates.

What recourse does an actor have in Johansson’s situation, where OpenAI claimed to use a similar voice, but not her actual voice?

The “right of publicity” is the relevant body of intellectual property law available to celebrities who believe their voice (or image, or likeness) was misappropriated. This is a state — rather than federal — cause of action [an avenue for lawsuits in courts], and a right now protected in the majority of states, although the scope of protection varies. In a similar case from 1988, the Ford Motor Co. wanted to use Bette Midler’s voice for a commercial, and, just as reportedly happened here, Midler declined. Ford then hired a “sound-alike,” and in a precedent-setting ruling, the Ninth Circuit decided in Midler’s favor, holding that a person’s voice is a protected attribute within their right of publicity.

Johansson called for regulations to protect actors’ likeness and individual people’s rights in the AI era. What could the pathway to regulation look like?

This is really up to Congress. A bill called the No Fakes Act of 2023 was recently introduced to grant federal protection — not only against commercial exploitation, but also to protect privacy interests in light of deepfakes, which may cause not just monetary harm, but also personal and emotional harm. As mentioned, the right is already protected in the vast majority of states by a web of “image, name, likeness” common and statutory laws.

There was an outcry when Kanye West’s “Famous” music video featured nude wax figures of celebrities, which were later exhibited as sculptures. How do we draw the line legally between what is art and what violates privacy?

The music video and sculptures also raise the right of publicity question. One defining line between “art” and right of publicity misappropriation is the commercial nature of the use, which is a condition for the tort; the cause of action is generally limited to commercial uses of a celebrity’s name, image, likeness or voice. Another distinguishing element is whether there is something artistic or transformative in the use that would enjoy First Amendment protection of expressive speech. Here, what’s relevant is how close the AI version was to Johansson’s real voice.

AI is also affecting other creative mediums, such as authorship. How would you weigh the claims of the authors who are suing OpenAI for absorbing their works into ChatGPT? Can authors protect their work against AI companies?

This is a hot, currently pending question about the intersection of AI and copyright law. There are a few dozen pending cases on point. It is not clear whether, and under which circumstances, training machines on copyrighted works violates authors’ rights. Some scholars believe that even if it infringes, such training should enjoy the affirmative defense of “fair use,” but I think everybody would agree that this cannot be an automatic exemption, as AI sometimes spits out identical or nearly identical copies of one of the works it was trained on, a phenomenon called “regurgitation.”

Johansson’s case, centering on the right of publicity, may be a close parallel to cases arising under copyright law. OpenAI denies that it tried to imitate her voice, a claim others doubt. However, it is entirely possible that Johansson’s voice was one of many voices in the training set OpenAI used, and that OpenAI’s model “regurgitated” her voice. Because the law at the intersection of AI and the different bodies of IP is far from clear, a practice is now developing, at least in the context of copyright law, in which owners of datasets of copyrighted works license the works to companies developing generative AI systems, thus avoiding legal risk.

Are there any basic steps you recommend to protect your likeness and work?

As in other areas of intellectual property, policing is key. People have many reasons to protect themselves and their copyrighted works, whether it’s to prevent deepfakes or counter AI “hallucinations” — made-up content about a person that may harm their honor or reputation.

Anyone wishing to protect their informational assets — whether for personal reasons or for monetary gain — should constantly be on the lookout for infringement or misappropriation, and police their IP, such as by complaining or sending cease-and-desist letters. In Johansson’s case, it seems that after she complained, OpenAI stopped using “her” voice. Another route available for copyrighted works is to register them with the U.S. Copyright Office, which, although optional, grants an extra layer of protection.

Founded in 1819, the University of Virginia School of Law is the second-oldest continuously operating law school in the nation. Consistently ranked among the top law schools, Virginia is a world-renowned training ground for distinguished lawyers and public servants, instilling in them a commitment to leadership, integrity and community service.
