
What you need to know about the Q* super AI

"Threat to humanity"


A warning about a potentially dangerous development in artificial intelligence allegedly played an important role in the dismissal of Sam Altman as head of ChatGPT provider OpenAI. What is behind the alleged superintelligence Q*?

Until a few days ago, so-called superintelligence was considered little more than a pipe dream. Many people were amazed at what artificial intelligence can already do and at the speed with which software developers were releasing new and better programs. Yet even many experts could not imagine AI becoming smarter than humans, at least not yet; it would still take years to develop a superintelligence, one often heard in specialist circles. Now, however, there is speculation that an important breakthrough may already have been achieved.

The reason for this is a new project by ChatGPT inventor OpenAI called Q* ("Q-Star"). The model is reportedly able to solve mathematical problems it has not seen before on its own - experts consider this a milestone on the way to "Artificial General Intelligence", or AGI for short, colloquially known as superintelligence.

As reported by the news agency Reuters and the magazine "The Information", Q* is also said to have played a role in the dismissal of the now reinstated CEO and OpenAI co-founder Sam Altman. According to the two sources, a test version of the model, which was probably circulating within OpenAI, alarmed security experts. An internal letter to staff apparently warned that the development of Q* could pose a "threat to humanity".

"Nobody knows exactly what it is"

But what can the program do that has triggered such a surge of fear in circles at the software company? "Nobody knows exactly what it is," says Damian Borth, Academic Director of the Doctoral Program in Computer Science at the University of St. Gallen. "There is no blog post or paper that has been published. There is only conjecture, and that's the interesting thing." Like many others in the community, he suspects that the "Q" in the name refers to so-called Q-learning, an algorithm from reinforcement learning, a branch of machine learning. Put simply, a program interacts with its environment, makes decisions and receives a reward for a positive action. This reinforces the behavior (hence "reinforcement"), so the program carries out the action more often; for negative actions, the opposite happens.
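To illustrate the basic idea, here is a minimal Q-learning sketch in Python. It is purely illustrative: the toy environment, reward scheme and hyperparameters are invented for this example and say nothing about how Q* itself works.

```python
# Minimal Q-learning sketch (illustrative only; the toy environment and
# all numbers below are invented and unrelated to OpenAI's Q*).
import random

n_states, n_actions = 5, 2                      # tiny made-up environment
Q = [[0.0] * n_actions for _ in range(n_states)]  # table of value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.2           # learning rate, discount, exploration

def step(state, action):
    """Hypothetical environment: moving 'right' toward the last state pays off."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    for _ in range(20):
        # Explore occasionally, otherwise take the action with the highest Q-value
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Core update: nudge the estimate toward reward + discounted best future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
```

The heart of the method is the single update line: rewarded actions gradually accumulate higher value estimates and are chosen more often, which is exactly the reinforcement effect described above.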

Others in the OpenAI online community, however, suspect that quantum computing is behind the project's code name. Quantum computers are extremely powerful and can solve specific complex problems with many variables faster than conventional computers. However, Borth believes this is unlikely. "OpenAI hasn't done much in this area, but has clearly focused on GPUs, i.e. graphics processors," he says. "In reinforcement learning, on the other hand, OpenAI has always been very strong. Alongside generative AI, which includes ChatGPT, this is one of the central pillars."

Behind the asterisk in Q*, the community suspects the "A*" algorithm, which can determine the shortest path between two nodes or points. To do this, it does not blindly expand the next available node, but uses additional information, a heuristic estimate of the remaining distance, to speed up the search.
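For comparison, here is a minimal A* sketch in Python; the small graph and the heuristic values are made up for this illustration.

```python
# Minimal A* sketch (illustrative only; the graph and heuristic are invented).
import heapq

graph = {  # hypothetical weighted graph: node -> [(neighbour, cost), ...]
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}  # optimistic estimate of remaining cost to "D"

def a_star(start, goal):
    # Priority queue ordered by (cost so far + heuristic estimate)
    frontier = [(heuristic[start], 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbour, step_cost in graph[node]:
            if neighbour not in visited:
                new_cost = cost + step_cost
                heapq.heappush(frontier, (new_cost + heuristic[neighbour],
                                          new_cost, neighbour, path + [neighbour]))
    return None, float("inf")

print(a_star("A", "D"))  # -> (['A', 'B', 'C', 'D'], 3)
```

The heuristic is the "additional information" mentioned above: it lets the search skip nodes that a blind shortest-path search would otherwise have to explore.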

Users openly express skepticism

Although there is almost no reliable information about Q* so far, many in the community are already declaring the new AI model the "greatest breakthrough in human civilization", a "revolution" and a "groundbreaking" system. Big words, considering that, according to Reuters and The Information, Q* can only solve math problems at primary school level.

Some users therefore also openly express skepticism: "As someone who has done a lot of research into AI, I can say that it is very easy to believe that a breakthrough has been achieved," writes one. Another writes that "human or super-human intelligence" requires a "different architecture". "Q* is a movement in that direction, but it's not at all clear if it's 'that'," writes a user in the OpenAI forum.

In fact, the special thing about Q* is apparently that it can solve mathematical tasks on its own. "As far as we know so far, this is the first time that an AI has managed the kind of intellectual performance required for mathematics," says Borth. "So the machine is not just parroting, as skeptics say of ChatGPT, but Q* is supposed to have the ability to draw logical conclusions." Whether this is also a decisive step towards AGI, however, cannot yet be said.

"For one thing, the definition of AGI is not entirely clear. Is it a machine that is aware of itself, that works against humans or that simply generalizes across several tasks?" says Borth. "On the other hand, in my view, AGI is not even necessary to be dangerous to humans. Depending on how we handle our current systems, that could already happen."

Altman is considered the face of the AI boom

The unease also stems from the fact that the warning allegedly came from within the company itself. Security experts are said to have been particularly alarmed by the pace of development, reports The Information.

Altman, who is considered the face of the AI boom and is said to have pursued the goal of teaching computers to learn independently from the outset, commented on the potential risks of AI at a hearing in the US Senate this year: "My worst fears are that we - the field, the technology, the industry - cause significant harm to the world. [...] I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that," said Altman, who is now CEO of OpenAI again after an unprecedented back-and-forth.

The board had initially dismissed Altman almost two weeks ago without giving reasons and twice appointed an interim CEO. Last Wednesday, however, the pressure from major investor Microsoft became too great and Altman returned to his post. At the same time, a new board of directors was appointed, including former US Treasury Secretary Larry Summers. According to Sarah Kreps, Director of the Tech Policy Institute in Washington, the new board supports Altman's vision of accelerating the development of AI while at the same time taking safety precautions.

This article first appeared on Capital.de.


Source: www.ntv.de
