
China to enforce political loyalty in AI

China is bringing its AI chatbots into political line. The country's powerful internet regulator is examining whether the technology of Alibaba and other companies adheres to "socialist values." The review process is lengthy and poses a challenge for the companies.

China's powerful internet regulator, the Cyberspace Administration of China (CAC), regards the AI language models of major Chinese tech companies as a thorn in its side. The CAC is therefore forcing TikTok owner ByteDance and e-commerce giant Alibaba to submit their models to mandatory government review. The "Financial Times" reports this, citing multiple sources involved in the process.

The goal is to ensure that the systems "reflect socialist values." In a sampling procedure, the so-called large language models (LLMs) are tested on a wide range of questions. Many of them relate to politically sensitive topics and President Xi Jinping. The CAC sends a specialized team into the companies to test the AI language models, an employee of a Hangzhou-based AI company told the "Financial Times." The first attempt failed for unclear reasons. "It takes a while to speculate and adjust. We passed the second time, but the whole process took months," the anonymous source says.

The strict approval process has forced China's AI companies to learn quickly how to censor their own language models effectively. "Our base model is very, very uninhibited, so security filtering is extremely important," the newspaper quotes an employee of a leading AI start-up in Beijing. The work begins with weeding "problematic" information out of the training data and building a database of sensitive keywords. In February of this year, China published guidelines for AI companies stating that thousands of sensitive keywords and questions must be collected and updated weekly.
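As a rough illustration, keyword-based filtering of training data might look like the minimal sketch below; the keyword list, function names, and simple substring matching are illustrative assumptions, not the companies' actual tooling:

```python
# Hypothetical sketch of keyword-based training-data filtering.
SENSITIVE_KEYWORDS = {
    "placeholder_keyword_1",  # illustrative only; the real databases
    "placeholder_keyword_2",  # reportedly hold thousands of entries, updated weekly
}

def is_clean(document: str) -> bool:
    """True if the document matches none of the sensitive keywords."""
    text = document.lower()
    return not any(keyword in text for keyword in SENSITIVE_KEYWORDS)

def filter_training_data(documents: list[str]) -> list[str]:
    """Drop any training document that hits the keyword database."""
    return [doc for doc in documents if is_clean(doc)]
```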

Users of China's AI chatbots receive no answer to sensitive questions, such as those about the Tiananmen Square massacre. Instead, they are either asked to pose a different question or get the response: "I haven't learned yet how to answer that question. I will continue to learn to serve you better." AI experts say this poses significant challenges for developers, because they must control both the text generated by the LLMs and the responses the chatbots ultimately deliver. "They build an additional layer to replace the answer in real time," the "Financial Times" quotes Huan Li, who developed the chatbot Chatie.IO. In practice, this means a non-conforming answer may briefly appear as it is generated, only to vanish moments later.
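Huan Li's description suggests a post-hoc filter wrapped around the model's streamed output. The following is a minimal sketch under that assumption; the function names, refusal text handling, and the retraction marker are hypothetical stand-ins for whatever protocol a real client would use:

```python
from typing import Callable, Iterable, Iterator

# Canned refusal quoted in the article.
REFUSAL = ("I haven't learned yet how to answer that question. "
           "I will continue to learn to serve you better.")

def moderated_stream(tokens: Iterable[str],
                     is_sensitive: Callable[[str], bool]) -> Iterator[str]:
    """Forward tokens as they arrive; if the accumulated answer ever
    trips the filter, retract the shown text and emit the refusal instead."""
    answer = ""
    for token in tokens:
        answer += token
        if is_sensitive(answer):
            yield "<RETRACT>"  # hypothetical signal telling the client to clear the display
            yield REFUSAL
            return
        yield token
```

This would explain the observed behavior: the client displays tokens as they stream in, so a non-conforming answer is briefly visible before the retraction signal replaces it.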

To keep its population politically in line, Beijing has introduced its own AI chatbot, based on a new model trained on Xi Jinping's political philosophy. At the same time, according to the "Financial Times," China wants to prevent the AI from dodging all political topics. The CAC has capped the number of questions LLMs may reject during the tests, an employee at a tech company that guides firms through the process told the newspaper. The standards introduced at the beginning of the year state that LLMs may not reject more than five percent of the questions posed to them.
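As a simple illustration of that five-percent cap, a compliance check over a batch of test answers might look like the following; the refusal classifier passed in is a hypothetical placeholder:

```python
MAX_REJECTION_RATE = 0.05  # models may refuse at most 5% of test questions

def rejection_rate(answers, is_refusal) -> float:
    """Fraction of a batch of answers classified as refusals."""
    if not answers:
        return 0.0
    return sum(1 for a in answers if is_refusal(a)) / len(answers)

def passes_rejection_cap(answers, is_refusal) -> bool:
    """True if the model stayed within the five-percent refusal cap."""
    return rejection_rate(answers, is_refusal) <= MAX_REJECTION_RATE
```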

The CAC's scrutiny is not limited to ByteDance and Alibaba: other major Chinese tech companies working on artificial intelligence are undergoing similar examinations. Through these reviews, the regulator aims to ensure that AI models developed in China do not deviate from the country's "socialist values."
