
AI Experts Warn of Potential Risks to Human Existence

Mind the risks

Has ChatGPT chosen "glitzy products" over security? At least that's what developer Jan Leike claims.


Is AI technology a double-edged sword for humankind? Some experts are alarmed by the dangers of the breakthrough technology, while others in the field consider such warnings overblown.

Noted AI experts have written a new, alarming report on the potential dangers of AI technology. "Without sufficient caution, we could irreversibly lose control of autonomous AI systems," the researchers warn in a recent publication in the journal Science. The anticipated perils of AI include large-scale cyber-attacks, social manipulation, pervasive surveillance, and even the extinction of humanity. Prominent figures in AI research, including Geoffrey Hinton, Andrew Yao, and Dawn Song, contributed to the analysis.

The authors of the Science article are particularly worried about autonomous AI systems, which could, for instance, operate computers independently in pursuit of their designated objectives. The scientists see an inherent risk in such programs: they follow their specifications closely but have no understanding of the outcome actually intended. "If autonomous AI systems pursue undesirable goals, we may be unable to keep them under control," the paper states.

As the recent past has shown, such forceful warnings are by no means new. This time, they coincide with the AI Summit in Seoul, held on May 21st and 22nd. On the meeting's first day, US tech powerhouses such as Google, Meta, and Microsoft pledged to develop AI responsibly.

How responsibly OpenAI, the pioneer of the technology, is actually acting came under fresh scrutiny as the summit opened. Jan Leike, a former OpenAI employee who had been responsible for making AI software safe for humans, criticized the company's priorities after his resignation. He revealed that "glitzy products" had taken precedence over safety in recent years. Leike warned that "building software smarter than humans is an inherently dangerous endeavor," and said that a far deeper understanding of how to control AI is urgently needed.

In response, Sam Altman, OpenAI's CEO, sought to reassure the public of his company's commitment to AI safety. Yann LeCun, head of AI research at the Meta Group, took the opposite view: before there is any urgency to tackle AI control, systems "smarter than a house cat" would first have to exist. As things stand, he argued, it is as if someone in 1925 had warned that we urgently needed to learn how to handle passenger aircraft that could race across the ocean at hypersonic speed. By analogy, it will take years before AI technology becomes as smart as humans, and, as with airplanes, safety precautions will develop in step.


Source: www.ntv.de
