
Deepfake voices triggering brain responses

AI-induced deception of reality

The brain tries to correct computer-generated voices. However, this happens below the threshold of perception.


Artificial voice clones, created via deepfake technology, have been causing quite a stir, especially with scam attempts over the phone. These AI-generated voices trick victims into thinking they're talking to a real person, and the latest research reveals some surprising findings about how our brain reacts to them.

Fool's gold? It can be difficult to distinguish an actual human voice from one synthesized by AI. Yet the brain responds differently to deepfake voices than to natural ones, albeit unconsciously, as a research team reports in the journal Communications Biology. Listening to a forged voice, it appears, produces less pleasure.

Voice synthesis algorithms are now sophisticated enough that the identity markers of artificial voice clones closely resemble those of the genuine speakers. Such deepfake voices could be used in phone scams, or to give voice assistants the voice of a favorite celebrity.

Researchers led by Claudia Roswandowitz of the University of Zurich evaluated how well human identity is preserved in voice clones. In 2020, they recorded the voices of four German-speaking men and used computer algorithms to create deepfake voices for each speaker.

Deepfake voices are almost perfect deceptions

They then tested how convincing the imitation was by asking 25 participants to judge whether the identities of two presented voices were identical. In about two-thirds of the trials, the deepfake voices were correctly matched to the respective speaker. This indicates that current deepfake voices are not yet perfect deceptions, but they certainly have the potential to mislead listeners.
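To put the two-thirds figure in context, here is a minimal Python sketch (not the authors' analysis code) of how such a same/different matching task can be scored against the 50 percent chance level of a two-alternative judgment; the trial counts in it are hypothetical, since the article reports only the approximate proportion of correct matches.

```python
# Minimal illustrative sketch, not the study's analysis code.
# Scores a hypothetical same/different voice-identity matching task
# and compares the hit rate with the 50% chance level of a
# two-alternative judgment. All numbers are made up; the article only
# reports that roughly two-thirds of deepfake trials were matched to
# the correct speaker.
from math import comb

n_trials = 120   # hypothetical number of deepfake trials
n_correct = 80   # hypothetical correct matches (~two-thirds)

accuracy = n_correct / n_trials

# One-sided exact binomial test: P(X >= n_correct) under chance (p = 0.5)
p_value = sum(comb(n_trials, k) for k in range(n_correct, n_trials + 1)) / 2 ** n_trials

print(f"accuracy = {accuracy:.2f}, p = {p_value:.2e}")
```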

The team used functional magnetic resonance imaging (fMRI) to examine how individual brain areas react to forged and real voices. They found differences in two key regions: the nucleus accumbens and the auditory cortex. Both, the team explains, play a crucial role in whether a person recognizes a deepfake voice as a forgery.

"The Nucleus Accumbens is an essential element of the reward system in the brain," clarified Roswandowitz. It was less active when a Deepfake and a natural voice were compared than when two real voices were compared. In other words, listening to a faked voice activates the reward system less.

The brain attempts to adapt

There was also a difference in the auditory cortex, the region responsible for analyzing sounds. It was more active when participants had to recognize the identity of deepfake voices. "We assume that this area reacts to the still imperfect acoustic imitation of deepfake voices and tries to compensate for the missing acoustic signal," said Roswandowitz.

The cortex apparently performs this compensation covertly. "Presumably something signals to consciousness that this is different and more difficult, but it often remains below the threshold of perception."

AI technologies continue to develop

The researchers also investigated how well the brain can recognize deepfake videos. They found that the brain responds differently to deepfake videos than to real ones, particularly in the visual cortex and the limbic system. The team speculates that the brain tries to compensate for the inconsistencies in deepfake videos, but that this compensation is not yet perfect. The researchers presume that the brain's ability to recognize deepfake media will also improve as AI technologies continue to advance.

With the rapid growth of artificial intelligence technologies, the creation and distribution of deepfakes have become increasingly widespread, the researchers write in the study. Would deepfakes created with today's technology, rather than with that of four years ago, completely deceive listeners? Or would the results be similar? "That's a fascinating question," says Roswandowitz. Newer AI-generated voices probably have somewhat better sound quality, which could lead to smaller activity differences in the auditory cortex than at the time the study was conducted. In the nucleus accumbens, however, she expects similar results. "It would be very interesting to investigate this experimentally."
