The possibility of Scarlett Johansson suing OpenAI warrants serious attention.
The dispute escalated after Johansson said OpenAI had asked her to voice an AI assistant for ChatGPT and, when she declined, released a similar-sounding voice anyway. If the matter reaches court, OpenAI and its co-founder and CEO, Sam Altman, could face significant consequences.
Legal experts suggest Johansson would have a strong claim if she decided to sue, pointing to precedents that could expose a prominent AI company like OpenAI to serious damages and raise questions about the industry's readiness to handle AI's thornier legal issues.
Notably, OpenAI appeared either unaware of this legal background or willfully dismissive of it, underscoring perceptions of lax oversight in the AI industry. OpenAI has yet to respond to a request for comment.
OpenAI's Legal Concerns
According to legal experts, two legal areas might come into play, but only one is likely to be relevant based on the known facts.
The first is copyright law. If OpenAI directly sampled Johansson's films or other published works to create Sky, the friendly voice assistant featured in an upgrade to ChatGPT, it could face a copyright claim for using that material without permission.
However, OpenAI says it never used Johansson's actual voice, relying instead on "a different professional actress using her own natural speaking voice." While that may defuse a copyright claim, it probably would not shield the company from the second legal theory.
Tiffany Li, a law professor focusing on intellectual property and technology at the University of San Francisco, opined: "It doesn't matter if OpenAI used any of Scarlett Johansson's actual voice samples. She still has a valid right of publicity case here."
The Concept of Publicity Rights
Several states have right-of-publicity laws that protect individuals' identities or public personas from being misappropriated, and California - home to both Hollywood and OpenAI - has one of the strongest.
The law prohibits the unauthorized use of anyone's "name, voice, signature, photograph, or likeness" for the purpose of "advertising or selling, or soliciting purchases of, products, merchandise, goods or services."
Unlike a copyright claim, which concerns intellectual property, a right-of-publicity claim centers on the unauthorized use of someone's identity or public persona for commercial gain. Here, Johansson could argue that OpenAI misappropriated her identity for profit by misleading users into believing she provided Sky's voice.
OpenAI could try to defend itself by arguing that its now-viral videos demonstrating Sky's capabilities were not advertisements or aimed at boosting sales. John Bergmayer, legal director at Public Knowledge, a consumer advocacy group, doubts that defense would hold: "I believe use in a highly hyped promo video or presentation easily meets that test."
Legal Cases with Parallels
Several cases stand out that hint at OpenAI's potential issues.
In 1988, Bette Midler won a lawsuit against Ford Motor Company over an ad featuring what sounded like her voice. After Midler declined to record the ad, Ford hired a sound-alike instead. The imitation was so close that some listeners believed Midler herself had performed in the commercial.
The US Court of Appeals for the 9th Circuit ruled in Midler's favor, emphasizing: "Why did the defendants ask Midler to sing if her voice was not of value to them? Why did they seek the services of a sound-alike and instruct her to imitate Midler if Midler's voice was not of value to them? What they wanted was an attribute of Midler's identity. Its value was what the market would have paid for Midler to have sung the commercial in person."
In a 1992 case decided along the same lines by the 9th Circuit, Tom Waits was awarded $2.6 million in damages against Frito-Lay for a Doritos ad that used an imitation of his signature raspy voice. The court reiterated its reasoning in Midler, underscoring that California's right-of-publicity law protects a person's voice.
Johansson's dispute with OpenAI closely tracks those cases. Johansson says OpenAI asked her to voice Sky and she refused; the company later unveiled a version of Sky that sounded so much like her that, in her words, her closest friends could not tell the difference.
Whether OpenAI can fend off a misappropriation claim may come down to intent: whether it can show it did not set out to imitate Johansson's voice.
In a recent blog post, OpenAI said Sky was never meant to be an imitation of Johansson; the goal, it said, was to create an engaging voice that establishes trust and has a rich tone.
However, OpenAI may have jeopardized their position.
"OpenAI might have had a chance if they hadn't spent the past two weeks insinuating to everyone that they had just created Samantha from 'Her'," explained James Grimmelmann, a law professor at Cornell University. "There was widespread recognition that Sky was Samantha, and it was intentional."
Screenshots circulated on social media showing OpenAI's Twitter account comparing Sky to Johansson's character in the 2013 film 'Her'. On Monday, after Johansson's statement, Altman acknowledged that OpenAI had "cast the voice actor behind Sky's voice before any outreach to Ms. Johansson" and apologized for not communicating better.
This controversy highlights the broader problems deepfakes and AI pose. While California's publicity law protects everyone, some states' laws cover only famous individuals, and other states have no such protections at all.
What's more, existing laws may protect an individual's image or even voice, but they might not cover all AI-related actions, like prompting a model to recreate art in the style of a well-known artist.
"This situation reveals the necessity for a federal right of publicity law, as not every case will necessarily involve California," said Bergmayer.
Some companies have responded by proposing legislation. Adobe, which develops software including AI tools, is pushing a proposal known as the FAIR Act to establish a federal ban on AI impersonation. The company argues that the tech industry has an interest in ensuring creators can continue to benefit from their original works without losing their economic livelihoods to AI imitations.
Dana Rao, Adobe's chief trust officer and general counsel, put it this way: "The concern you have as a creator is that AI might supplant your economic livelihood because it's training on your work. That's the fear in the community. And what we're saying at Adobe is that we'll always offer the most fantastic technology to our creators, but we also believe in responsible innovation."
Meanwhile, US lawmakers are exploring legislation of their own. In 2023, a bipartisan group of senators released the NO FAKES Act, a discussion draft designed to protect creators. Another proposed bill in the House is called the No AI Fraud Act.
Nonetheless, digital rights advocates and academics cautioned that the emerging legislation potentially leaves significant gray areas.
"The concern is about free expression, like if people can use other individuals' likenesses for educational or other non-commercial purposes," Rothman, an intellectual property expert, said in an October blog post about the NO FAKES Act. "There are also questions about rights to a person's image after they pass away, which is essential for the recreation of deceased performers in movies or music."
These debates underscore the difficulty lawmakers face in navigating AI's complexities.
If OpenAI is found liable for using a voice similar to Scarlett Johansson's, the business repercussions could be significant, and the backlash could deter other tech companies from treating publicity rights and intellectual property law as afterthoughts.
As the dispute unfolds, legal experts and industry watchers are paying close attention to how courts might handle the intersection of AI technology and publicity rights, a question that will shape the evolving landscape of AI regulation.
Source: edition.cnn.com