
AI experts demand the right to warn the public about dangers.

Developers at AI companies such as OpenAI are well placed to assess the technology's potential risks. Yet some fear repercussions if they voice criticism.

ChatGPT is the best-known chatbot that helped trigger the hype surrounding artificial intelligence over a year ago.


A group of artificial intelligence (AI) researchers, including current and former employees of ChatGPT developer OpenAI, is demanding the right to warn the public about the potential dangers of the software.

Existing whistleblower protections are insufficient, the experts argue in a widely shared open letter: they mainly cover illegal corporate conduct, while in many areas there are no legal rules for artificial intelligence at all. "Some of us are afraid to speak out, as there have already been cases of retaliation in the industry," they wrote.

A recent example emerged shortly after the letter's publication: former OpenAI researcher Leopold Aschenbrenner told the "Dwarkesh Podcast" that he had been fired for raising AI safety concerns with the company's board.

The researchers urge companies developing advanced AI models to commit to four principles. One of them is not to punish employees for critical statements about their employers. The demand follows reports that OpenAI had threatened former staffers with the expiration of their stock options if they spoke negatively about the company. CEO Sam Altman apologized, said he had not been aware of the clause, and stated that it had never been enforced.

The letter also suggests a process that lets employees anonymously inform their company boards and regulators about risks they see in AI software. If there are no internal channels, they should have the freedom to go public.

AI experts have long warned that the rapid development of artificial intelligence could produce autonomous software that escapes human control. Possible consequences range from the spread of misinformation and widespread job losses to the annihilation of humanity. Governments are working on rules for AI development, and OpenAI, whose software powers ChatGPT, is considered a pioneer in the field.

An OpenAI spokesperson said the company takes a "scientific approach to technology risks." Employees can voice concerns anonymously, but should not publicly disclose confidential information that could fall into the wrong hands.

Four current and two former OpenAI employees signed the letter anonymously. Among the seven who gave their names are five former employees of OpenAI, one former employee of Google-owned DeepMind, and one current DeepMind employee, Neel Nanda, who previously worked for the AI startup Anthropic. Nanda stressed that he had no reason to issue warnings about his current or former employer.

In November, OpenAI's board of directors abruptly dismissed Altman, citing a loss of trust, only for employees and Microsoft to back him; he returned days later. After Altman's reinstatement, former board member Helen Toner said one reason for the dismissal was that the board had learned of ChatGPT's release only through the media, raising questions about whether the company had put the technology into circulation without the necessary safety precautions.

OpenAI also recently drew criticism after actress Scarlett Johansson questioned why one of ChatGPT's voices sounded strikingly similar to hers, even though she had declined an offer to provide her voice for it.

In sum, the researchers are calling for greater freedom to warn the public about the potential dangers of AI development, along with more transparency and stronger safety measures from companies and governments.
