
Current activities of AI specialist Sutskever revealed.

Heading towards Superintelligence

Has big plans: Ilya Sutskever.


AI maverick Ilya Sutskever, a pioneer in the field who is co-credited with sparking the current AI frenzy through AlexNet in 2012, is launching a new venture, Safe Superintelligence (SSI), with the goal of ensuring the safety of advanced AI technology.

At the age of 37, Sutskever, who recently celebrated his birthday, was one of the brains behind OpenAI's inception in 2015. His departure from OpenAI in mid-May fuelled speculation about his future plans. Now, the AI whiz has disclosed some insights about his forthcoming project: Sutskever, alongside his ex-OpenAI comrade Daniel Levy and ex-Apple manager Daniel Gross, is establishing Safe Superintelligence (SSI). The primary target: cultivating a safe superintelligence.

The issue of AI safety has long preoccupied Sutskever. He joined OpenAI with the twin objectives of advancing AI research while addressing its safety risks. The Altman debacle may have been part of this discord: Sutskever reportedly grew disillusioned as OpenAI's commercial interests escalated and the organization transformed into a billion-dollar behemoth. Sutskever has neither publicly confirmed nor denied this, but his former OpenAI colleague Jan Leike recently wrote on X that the emphasis on safety had taken a backseat to flashy products.

SSI aspires to rectify this situation. The company is meant to operate in a protected space, shielded from the external pressures of managing a large, complex product and competing in the market. "This company is unique because its first product will be a safe superintelligence, and that's it," Sutskever told Bloomberg.

Even SSI cannot escape commercial pressures entirely: the latest language models demand colossal amounts of data and computing power, and Sutskever's new venture is not immune to these economic realities. The identities of SSI's investors have not been disclosed.

What exactly SSI aims to accomplish and develop has yet to be clarified. It is apparent, however, that the focus won't be on fixing the defects and risks of existing AI products, such as privacy concerns, copyright issues, or the frequent lack of factual accuracy in tools like ChatGPT.

Sutskever grapples with larger questions, such as what advanced AIs will look like in the future and how they will coexist with humans. In an interview with The Guardian a few months ago, Sutskever expressed the view that AI would indeed "solve all the problems we have today," including unemployment, disease, and poverty, but that it would also spawn new challenges: "AI has the potential to create eternal dictatorships." Sutskever is referring to a hypothesized future stage of AI called superintelligence, which, in this view, would be more powerful than an artificial general intelligence (AGI): it would not merely replicate human abilities but exceed them.

Such a superintelligence, Sutskever told Bloomberg, must not inflict significant harm on mankind. "We want it to operate based on some essential values." By this he means "perhaps the values that have been so successful over the past few centuries and that underlie liberal democracies, such as freedom and democracy."

Exactly how the founders plan to instill these values into their AI models remains unclear. So far there is only the statement that colossal supercomputing facilities will one day develop innovative technologies autonomously. "That's insane, isn't it?" This, they say, is the safety they want to contribute to.

This text originally appeared at capital.de

