Attempted to displace OpenAI's CEO, now launching a competitor emphasizing safety.

Departed co-founder of OpenAI unveils new endeavor: a business focused on crafting secure, potent AI solutions.

Ilya Sutskever, the Russian-born Israeli-Canadian computer scientist and former chief scientist at OpenAI, announced he is starting a new company dedicated to "safe superintelligence."

On X Wednesday, Ilya Sutskever unveiled plans for a fresh venture dubbed Safe Superintelligence Inc. (SSI). In a statement on their website, the company declared:

"SSI is our mission, our identity, and our entire product strategy, as it's our main focus. Our team, investors, and business model are all in sync to achieve SSI. Our goal is to boost capabilities swiftly while ensuring safety takes the lead. This way, we can expand without any worries."

The announcement comes amid growing concern in the tech industry and beyond that AI capabilities are advancing faster than safety and ethics research can keep pace, with little regulation to guide companies toward responsible use.

Known as one of the early figures in the AI revolution, Sutskever worked under AI pioneer Geoffrey Hinton during his student years and co-created an AI startup that was later bought by Google. Following a stint on Google's AI research team, Sutskever helped establish the entity behind ChatGPT.

Sutskever's tenure at OpenAI ended turbulently, however: he was linked to an attempt to oust CEO Sam Altman last year, a dramatic leadership shakeup that saw Altman fired, rehired, and the board overhauled within the span of a week.

Sources such as CNN contributor Kara Swisher previously reported that Sutskever felt Altman was pushing AI technology too aggressively. After Altman's removal, however, Sutskever expressed regret for his involvement and signed an employee letter calling for the board's resignation and Altman's return.

In May, Sutskever announced his exit from OpenAI – one of several departures around that time – to pursue a project he described as personally meaningful.

Safe Superintelligence's plans remain vague: it is unclear how the company intends to monetize a "safer" AI model, what form its technology will take as a product, or how it defines "safety" for highly capable artificial intelligence.

In an interview with Bloomberg, Sutskever stated: "By safe, we mean safe like nuclear safety as opposed to safe as in 'trust and safety'."

Several departing OpenAI employees voiced concerns that the company was prioritizing commercial growth over long-term investment in safety. One of them, Jan Leike, criticized OpenAI in May for disbanding its "superalignment" team, which was dedicated to keeping AI systems aligned with human values and objectives. (OpenAI said superalignment team members were being distributed across the company to better address safety matters.)

In its launch announcement, Safe Superintelligence signaled a desire to chart a different course: "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Joining Sutskever in the new venture are Daniel Levy, who spent the last two years at OpenAI, and Daniel Gross, an investor who was previously a partner at startup accelerator Y Combinator and worked on machine learning at Apple. The company says it will open offices in Palo Alto, California, and Tel Aviv, Israel.

In line with its mission, Safe Superintelligence Inc. is seeking partnerships with tech companies to integrate its safety standards into their AI systems. As the venture progresses, it aims to establish itself as a significant player in the safe development and deployment of advanced AI technology.
