OpenAI executive departs amid concerns that the company is prioritizing profit over safety.
Jan Leike, who recently resigned as head of OpenAI's "superalignment" team, said he had been disagreeing with the company's leadership over its core priorities and had finally reached a "breaking point."
In the AI field, "alignment" (and "superalignment") refers to training AI systems to act in accordance with human needs and values. Leike joined OpenAI in 2021, and last summer the company announced he would co-lead its new Superalignment team, dedicated to achieving "scientific and technical breakthroughs to steer and control AI systems much smarter than us."
However, Leike said the past few months had been difficult for his team, which was short on resources and "sailing against the wind." He wrote on X that a lack of compute often hindered important research, and said Thursday would be his last day at the company.
"Developing AI systems more intelligent than humans is a perilous enterprise... But over the past years, safety culture and processes have been eclipsed by flashy products," Leike revealed on X.
Leike's departure, announced earlier this week, follows a broader shake-up of OpenAI's leadership. Ilya Sutskever, OpenAI's co-founder and chief scientist and the other leader of the superalignment team, had announced his exit the previous day to focus on a project that is "very personally meaningful" to him.
Sutskever's decision was notable because he played a central role in the ouster, and subsequent reinstatement, of OpenAI CEO Sam Altman last year. Sutskever reportedly worried that Altman was pushing AI technology too far, too fast, but days after Altman's removal he reversed course, signing an employee letter that urged the board to resign and to bring Altman back.
Still, debates over when, how, and how quickly to develop and release AI technology appear to have continued to create tension inside the company since Altman regained control. The recent resignations come as OpenAI announced it would make its newest, most powerful AI model, GPT-4o, available to the public through ChatGPT. GPT-4o enables real-time spoken conversations, making ChatGPT function more like a digital assistant.
"I believe much more of our time should be invested in preparing for the upcoming generations of models, in areas like security, monitoring, preparedness, safety, error robustness, (super)alignment, confidentiality, societal influence, and related subjects," Leike asserted in his Friday post on X. "These concerns are quite challenging to address, and I'm concerned we're not heading in the right direction."
Asked about Leike's claims, OpenAI referred CNN to an X post from Altman emphasizing the company's commitment to safety.
"I'm deeply grateful for @janleike's contributions to OpenAI's alignment research and safety culture, and very disappointed to see him go," Altman posted. "He's right we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days."
CNN's Samantha Delouya contributed to this report.
The departures leave open the question of whether OpenAI can keep its safety work from being overshadowed by its commercial ambitions. For now, the company's answer, in Altman's words, is that it has "a lot more to do" on safety and is "committed to doing it."
Source: edition.cnn.com