UNESCO sees danger from AI-generated historical falsification
Artificial intelligence (AI) models have a tendency to invent historical events, and they can also be trained on false content. UNESCO warns that this can distort perceptions of the Holocaust, but the organization also has a proposal for addressing the problem.
UNESCO, the United Nations' cultural organization, has warned against AI-generated Holocaust denial and falsification of history. Unless ethical principles for the use of AI, such as those UNESCO has already adopted, are established internationally, the representation of the Holocaust could be distorted and antisemitism fueled, according to a report by UNESCO and the World Jewish Congress.
Generative AI, such as the models behind chatbots like ChatGPT, relies on data from the internet that can contain misleading content and human bias. This can result in inaccurate representations of certain events and reinforce prejudices. Without monitoring and moderation by AI developers, generative AI tools could be trained on data from Holocaust denial websites.
Moreover, malicious actors can use generative AI to falsify material such as witness testimony and historical records related to the Holocaust. AI-generated fake images and audio are particularly convincing to young people who encounter them on social media platforms. Generative AI models also tend to invent events and even entire historical phenomena when they lack sufficient data.
"If we allow the terrible facts of the Holocaust to be diluted, distorted, or falsified through irresponsible use of AI, we risk the explosive spread of antisemitism and the gradual deterioration of our understanding of the causes and consequences of these atrocities," said UNESCO Director-General Audrey Azoulay. The implementation of UNESCO recommendations on AI ethics is urgently required, so that the younger generation grows up with facts and not with forgeries.