OpenAI temporarily halts use of ChatGPT voice resembling Scarlett Johansson from film 'Her'
An online community has criticized a new AI voice created by OpenAI, dubbed "Sky," pushing back against what many perceived as its overly familiar and flirtatious tone. Critics have likened the voice to a male developer's fantasy, and it has been subjected to widespread mockery. OpenAI has addressed these concerns, promising to pause the use of Sky while it investigates.
OpenAI has clarified that the voice is not derived from actress Scarlett Johansson, as some speculated, but from a different actress using her natural speaking voice. The company says it aims to create an approachable voice that inspires trust, with a rich tone and an easy-to-listen-to quality. The ChatGPT voice mode featuring Sky had not yet been widely released, but a video showcasing the AI at a product announcement, along with interviews in which OpenAI employees used it, gained significant attention online.
Criticism of Sky's voice reflects broader concerns about the biases inherent in technology created by predominantly white, male-led or male-financed tech firms. The unveiling of Sky also followed the departure of two high-ranking OpenAI members who raised questions about safety protocols and the resources devoted to preparing for the potential emergence of artificial general intelligence (AGI) that could outsmart humans.
OpenAI CEO Sam Altman initially responded to these concerns, thanking former employee Jan Leike for his commitment to safety matters and acknowledging the need for further action. The company has also confirmed that it dissolved the team previously led by Leike and folded its members into other research departments to strengthen the implementation of safety protocols. In a lengthy blog post signed by both Altman and OpenAI President Greg Brockman, the company detailed its approach to long-term AI safety, including raising awareness of AGI risks and engaging in international governance discussions. The post also emphasized the importance of a tight feedback loop, regular testing, consideration of potential risks, strong security measures, and harmony between safety and capabilities as AI becomes increasingly integrated into everyday life.
Wider concerns about AI safety
The controversy surrounding Sky points to the broader societal issues that arise around the development of AI within a largely white male-led tech industry.
In a social media post last week, Jan Leike, previously the head of the team responsible for long-term AI safety at OpenAI, claimed that the company's safety culture and processes had become secondary to "shiny products." He also raised concerns about the lack of resources dedicated to preparing for AGI. Altman acknowledged Leike's concerns and expressed a commitment to addressing them.
OpenAI recently began reorganizing its teams, integrating members of Leike's team into various research divisions. A spokesperson said the restructuring would enhance the company's ability to meet its safety-related objectives.
In a lengthy post on Saturday signed by both himself and Altman, OpenAI President Greg Brockman laid out the company's approach to ensuring AI safety, highlighting its efforts to raise awareness of AGI risks, its research into the potential consequences of scaling up deep learning models, and its work assessing AI systems for catastrophic outcomes. He added that as the technology becomes more integrated into human life, the primary focus will be on maintaining a tight feedback loop, rigorous testing, close attention to potential dangers, robust security, and harmony between safety and capabilities.
In the tech industry, concerns have long been raised about biases in AI voices created by predominantly white, male-led firms. This was evident in the backlash against OpenAI's new voice, Sky, which critics perceived as flirtatious and reminiscent of a male fantasy.
Despite the temporary pause on Sky, companies like OpenAI continue striving to develop approachable and trustworthy AI voices, recognizing the importance of safety protocols and long-term AI safety measures in the business of tech innovation.
Source: edition.cnn.com