
China forces tech giants to censor

Loyalty to the party line expected

A sampling procedure tests so-called large language models (LLMs) on a whole range of questions.


China wants to bring its AI chatbots into political line. The country's powerful internet regulator is examining whether the technologies of Alibaba and other companies uphold "socialist values". The process is lengthy and poses a challenge for the companies.

China's powerful internet regulator, the Cyberspace Administration of China (CAC), regards the AI language models of the major Chinese tech companies as a thorn in its side. The CAC is therefore forcing ByteDance, the owner of TikTok, and the e-commerce giant Alibaba, among others, to submit their models to a mandatory government review. That is according to the "Financial Times", citing several people involved in the process.

The review is meant to ensure that the systems "reflect socialist values". A sampling procedure tests the so-called large language models (LLMs) on a whole range of questions, many of which relate to politically sensitive topics and President Xi Jinping. The CAC sends a special team into companies to test the AI language models, an employee of a Hangzhou-based AI company told the "Financial Times". The first attempt failed for unexplained reasons. "It takes a certain amount of time to speculate and adapt. We passed the second time, but the whole process took months," the anonymous source reports.

The strict approval process has forced China's AI companies to quickly learn how best to censor their own language models. "Our basic model is very, very unrestrained, so security filtering is extremely important," the newspaper quotes an employee of a leading AI startup in Beijing. The work begins with sorting "problematic" information out of the training data and building a database of sensitive keywords. In February of this year, China published a guide for AI companies which states that thousands of sensitive keywords and questions must be collected and updated weekly.
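In rough outline, such keyword filtering could look like the following minimal sketch. It assumes a plain blocklist match over training documents; the actual pipelines and keyword lists are not public, and every name in the snippet (SENSITIVE_KEYWORDS, filter_corpus and so on) is illustrative only.

```python
# Hypothetical sketch of keyword-based filtering of training data:
# screen documents against a regularly updated blocklist. All names
# here are made up for illustration.

SENSITIVE_KEYWORDS = {
    "placeholder_term_a",  # the real lists reportedly contain
    "placeholder_term_b",  # thousands of entries, updated weekly
}

def contains_sensitive_term(text: str) -> bool:
    """Return True if any blocklisted term appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in SENSITIVE_KEYWORDS)

def filter_corpus(documents: list[str]) -> list[str]:
    """Drop documents that match the blocklist before training."""
    return [doc for doc in documents if not contains_sensitive_term(doc)]
```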

Users of China's AI chatbots receive no answer to sensitive questions, for instance about the Tiananmen Square massacre. They are either advised to ask a different question or get the response: "I haven't learned yet how to answer that question. I will continue to learn to serve you better." AI experts say this poses significant challenges for developers, since they must control both the text generated by the LLMs and the models themselves. "They build an additional layer to replace the answer in real time," the "Financial Times" quotes Huan Li, who developed the chatbot Chatie.IO. In practice, a non-conforming response is generated and displayed at first, but disappears shortly afterwards.
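Huan Li's description suggests a moderation layer that watches the output while it streams. The sketch below is a hypothetical illustration of that idea, not Chatie.IO's actual implementation; stream_tokens and is_compliant are assumed placeholder callables.

```python
# Hypothetical sketch of an "additional layer" that runs alongside
# streaming generation and swaps out a non-conforming answer in real
# time. Not actual production code.

FALLBACK = ("I haven't learned yet how to answer that question. "
            "I will continue to learn to serve you better.")

def moderated_stream(stream_tokens, is_compliant):
    """Yield ("APPEND", token) events while the answer stays compliant;
    on a violation, emit a single ("REPLACE", FALLBACK) event so the
    client retracts what was already shown."""
    answer = ""
    for token in stream_tokens:
        answer += token
        if not is_compliant(answer):
            yield ("REPLACE", FALLBACK)
            return
        yield ("APPEND", token)
```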

To keep its population politically in line, Beijing has introduced its own AI chatbot, based on a new model trained on Xi Jinping's political philosophy. At the same time, according to the "Financial Times", China wants to prevent the AI from dodging all political topics altogether. The CAC has limited the number of questions that LLMs may refuse during the tests, a worker who helps tech companies through the process told the newspaper. The standards presented at the beginning of the year state that LLMs should not reject more than five percent of the questions put to them.
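The five-percent cap is straightforward to express as a check over a batch of test questions. The following sketch is purely illustrative; how the CAC actually measures refusals is not described in the report, and model and looks_like_refusal are assumed callables.

```python
# Illustrative check of the reported five-percent rule: count how
# many of a batch of test questions a model refuses to answer.

def refusal_rate(model, questions, looks_like_refusal) -> float:
    """Fraction of questions the model declines to answer."""
    refused = sum(1 for q in questions if looks_like_refusal(model(q)))
    return refused / len(questions)

# Under the reported standard, a model would fail the test if
# refusal_rate(model, questions, looks_like_refusal) > 0.05.
```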

For Alibaba, one of the companies under scrutiny, this means aligning its AI language models with China's "socialist values": the strict approval process has pushed the company to sharpen its censorship capabilities so that its models steer clear of sensitive topics.
