AI Experts Warn of Potential Risks to Human Existence
AI technology - a double-edged sword for humankind? Some experts are sounding the alarm over the dangers of the technology, while others in the field consider such warnings overblown.
Leading AI researchers have published a new and alarming assessment of the potential dangers of AI technology. "Without sufficient caution, we may irreversibly lose control of autonomous AI systems," the researchers write in a recent paper in the journal Science. Among the risks they anticipate are large-scale cyber-attacks, social manipulation, pervasive surveillance, and even the extinction of humanity. Prominent figures in AI research, including Geoffrey Hinton, Andrew Yao, and Dawn Song, are among the authors of the analysis.
The authors of the Science article are particularly concerned about autonomous AI systems, which could, for example, operate computers independently in pursuit of the goals they have been given. The researchers stress the inherent risk in such programs: they follow their specified objectives literally but have no understanding of the outcome actually intended. "Once autonomous AI systems pursue undesirable goals, we may no longer be able to control them," the paper warns.
As the recent past has shown, such dire warnings are nothing new. This time, however, they coincide with the AI Summit in Seoul, held on May 21 and 22. On the summit's first day, US tech heavyweights including Google, Meta, and Microsoft pledged to develop AI technology responsibly.
How responsibly the AI pioneer OpenAI is actually operating came under increased scrutiny as the summit opened. Jan Leike, a former OpenAI employee who was responsible for making the company's AI software safe for humans, criticized its leadership after resigning. He charged that in recent years "shiny products" had taken precedence over safety. "Building smarter-than-human machines is an inherently dangerous endeavor," Leike warned, adding that far more research into how to control AI is urgently needed.
In response, OpenAI CEO Sam Altman sought to reassure the public of his company's commitment to AI safety. Yann LeCun, head of AI research at Meta, took the opposite view: before any urgent work on controlling AI became necessary, systems "smarter than a house cat" would first have to exist. As things stand, he argued, it is as if someone in 1925 had warned that we urgently needed to learn how to control aircraft capable of carrying passengers across the ocean at near the speed of sound. By the same logic, it will take years before AI technology is as smart as humans, and, as with aircraft, safety measures will be developed alongside it.
Source: www.ntv.de