How the Athletes in Paris should be protected from Online Hate

Athletes are targeted online with everything from harassment to death threats. In Paris, artificial intelligence is supposed to protect them. Can it succeed?

Modern pentathlete Annika Zillekens was inundated with online hate messages during the Tokyo Games in 2021. According to the Olympic organizers, something like this should not be possible in Paris this summer.

Modern pentathlete Annika Zillekens was on course for gold in Tokyo, but her randomly assigned horse Saint Boy refused to jump, and chaos followed. Both animal and rider were overwhelmed by the pressure of the situation. In her desperation, Zillekens used spurs and whip excessively, which led to accusations of animal cruelty in the media. As a consequence of the incident, modern pentathlon will replace riding with an obstacle discipline after the Paris Games, bringing the debate about the sport to an end.

After the competition, Zillekens was subjected to online abuse. Her Instagram account became the target of a storm of defamation, insults, and even death threats. Zillekens was at her wit's end: alone in Tokyo, she deactivated her account.

In Paris, the IOC intends to detect hate posts, evaluate them, have them deleted from the social networks, and have the offending accounts banned; depending on the severity, for example in the case of a death threat, legal action is to be taken. Ideally, the athletes never even see the posts.

Online violence is part of modern sport

"Online violence is something that is ubiquitous in modern sport. It's something sinister that is practically everywhere in the digital age," says Kirsty Burrows. She is the head of the Safe Sport department at the International Olympic Committee IOC. For the Paris Olympics, all 11,000 athletes and officials are to be protected. The goal: the creation of a comprehensive online safe space for the tens of thousands of social media accounts of athletes and officials, if they do not actively reject it.

With the help of a large-scale AI monitoring system, the usual course of events is to be reversed: until now, victims of cyberbullying had to find, screen, and report harmful comments themselves, a process that is often distressing and painful for those affected. "Paris 2024 will help us better understand the typology of online violence in sport," says Burrows. "Digital violence is not always what one might expect. It will be interesting to see what the triggers are. Through the Olympics and Paralympics in Paris, we will be able to get a better picture of this."

A billion-scale challenge online

Burrows' task is to protect athletes and officials against discrimination and digital violence. It is a massive task that can only be handled with the help of AI: "If we take the industry average of around four percent harmful content, then out of the roughly half a billion posts we expect during the Games, that is 20 million potentially harmful ones. And comments are not even counted," Burrows calculates.
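
As a rough sanity check of that estimate, here is the arithmetic behind Burrows' figures; the half-billion total and the four-percent rate are the only numbers given in the article, everything else is illustrative:

```python
# Back-of-the-envelope check of the volume Burrows describes.
expected_posts = 500_000_000   # roughly half a billion posts expected during the Games
harmful_rate = 0.04            # industry average of about 4% harmful content

potentially_harmful = expected_posts * harmful_rate
print(f"{potentially_harmful:,.0f} potentially harmful posts")  # 20,000,000
```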

The IOC's five-member Safe Sport unit has secured the support of the British company signify.ai, although the Olympic organizers do not officially confirm this; signify.ai also declines to comment when asked. The London-based company has already protected various international events in tennis and rugby from hate postings and can draw on that experience. The Olympics will nevertheless be its biggest challenge. "The AI scans millions of data points using natural language processing to specifically identify potentially threatening open-source data," Burrows explains.

However, hate postings must be publicly visible, not exchanged privately, for example in a direct chat. "For us to take action, it must be a targeted, concrete threat." The account of an athlete or official must therefore be directly addressed, for example through a hashtag or a comment.

In the case of the German athlete Zillekens, the language algorithm would have scanned all comments under her Instagram posts, or searched other networks for variations of her name or the horse's name, and evaluated postings containing threats against her. Reported cases would then be reviewed by an employee and either deleted or investigated further.
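
Neither the IOC nor signify.ai has published technical details of this scanning step. The following is only a minimal sketch of the kind of keyword matching described above; the name variants, threat terms, and example comments are invented for illustration:

```python
import re

# Hypothetical watchlist: variants of the athlete's name and of the horse "Saint Boy".
NAME_VARIANTS = [r"zillekens", r"annika\s+z\w*", r"saint\s*boy"]
THREAT_TERMS = ["kill", "hurt", "deserve to die"]  # placeholder threat vocabulary

def flag_comment(comment: str) -> bool:
    """Return True if a public comment both addresses the athlete and contains threat language."""
    text = comment.lower()
    mentions_target = any(re.search(pattern, text) for pattern in NAME_VARIANTS)
    sounds_threatening = any(term in text for term in THREAT_TERMS)
    return mentions_target and sounds_threatening

# Flagged comments would then go to a human reviewer, as the article describes.
comments = ["Great performance!", "You deserve to die, Zillekens"]
for c in comments:
    if flag_comment(c):
        print("flag for human review:", c)
```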

Threatening means anything that is punishable under applicable digital laws, such as explicit threats. The AI also picks up suggestive posts, comparable to the comments by satirist El Hotzo after the attack on Donald Trump: Trump's name was mentioned, and El Hotzo has a large reach. In such complex cases, the final decision, for instance whether it leads to charges, is made not by the artificial intelligence but by a human.

"A proprietary threat algorithm is applied. The data is categorized based on the type of violence. Then it goes to a human triage level," says Burrows. Ultimately, people at signify.ai will evaluate ambiguous or extreme cases to determine the context in which the statements were made.

Weibo and VK are left out

But the machine cannot find everything, because it matters which platform is used. "We cover the networks with the most users, such as Facebook, Instagram, TikTok, and X," says Burrows. If athletes are targeted on the popular Chinese or Russian counterparts Weibo or VK, those cases slip past the AI screening. "However, if someone is deliberately insulted on Weibo or another social media platform that is not covered, our systems and services, especially the IOC department, are still there to help," says Burrows.

Regardless of the platform, the perpetrators of online violence are ideally to be suspended or, if necessary, prosecuted. That approach is complicated by differing legal frameworks, however. "Jurisdiction lies with the place where the person causing the harm resides," says Burrows. A legal team will therefore handle all cases reported at the Games.

Which law applies at a global event online?

But what happens with politically controversial statements that could lead to a conviction in a particular country? Or with violent conflicts such as those in Israel or Ukraine, which are also waged digitally, in part through bots? "Our system is in no way used for monitoring based on broader geopolitical campaigns, sentiments, or policies," emphasizes Burrows.

At a sporting event built on competition between nations, a statement can quickly be made that an individual country considers critical or unpatriotic: Russia, China, Egypt, Indonesia, and Iran, among others, have already prosecuted people for critical, unpatriotic, or blasphemous comments online.

"It's really about targeted threats against athletes and officials, including protected characteristics, such as nationality," assures Burrows. One of the team's most challenging tasks will be to identify defamatory statements from political actors and secure the obtained data from external access by such actors.

In response to the online harassment athlete Annika Zillekens faced in Tokyo, the International Olympic Committee (IOC) plans to use artificial intelligence (AI) at the Paris Olympics to monitor and combat digital violence. Kirsty Burrows, head of the Safe Sport department at the IOC, aims to create a comprehensive online safe space that protects athletes and officials from harm. The AI system, run with the support of the British company signify.ai, will scan millions of data points to identify potentially threatening comments, focusing on targeted, concrete threats. However, social media platforms such as Weibo and VK, which have large user bases in China and Russia, will not be covered by the AI screening.

Kirsty Burrows (right) heads the Safe Sport department at the IOC. Her team will use artificial intelligence to protect athletes from violence and hate online.
