
Brands should avoid this popular term. It’s turning off customers

A study published in the Journal of Hospitality Marketing & Management finds that consumers are turned off by products described as “AI-powered”


A study published in the Journal of Hospitality Marketing & Management in June found that describing a product as using AI lowers a customer’s intention to buy it. Researchers sampled participants across various age groups and showed them the same products; the only difference was that one was described as “high tech” and the other as using AI, or artificial intelligence.

“We looked at vacuum cleaners, TVs, consumer services, health services,” said Dogan Gursoy, one of the study’s authors and the Taco Bell Distinguished Professor of hospitality business management at Washington State University, in an interview with CNN. “In every single case, the intention to buy or use the product or service was significantly lower whenever we mentioned AI in the product description.”

Despite AI’s rapid advancement in recent months, the study highlights consumers’ hesitance to incorporate AI into their daily lives – a marked divergence from the enthusiasm driving innovations in big tech.

The role of trust...

Included in the study was an examination of how participants viewed products considered “low risk,” which included household appliances that use AI, and “high risk,” which included self-driving cars, AI-powered investment decision-making services and medical diagnosis services.

While the percentage of people rejecting the items was greater in the high-risk group, non-buyers were the majority in both product groups.

There are two kinds of trust that the study says play a part in consumers’ less-than-rosy perception of products that describe themselves as “AI-powered.”

The first kind, cognitive trust, has to do with the higher standard that people hold AI to as a machine they expect to be free from human error. So, when AI does slip up, that trust can be quickly eroded.

Take Google’s AI-generated search results overview tool, which summarizes search results for users and presents them at the top of the page. People were quick to criticize the company earlier this year for providing confusing and even blatantly false information in response to users’ questions, pressuring Google to walk back some of the feature’s capabilities.

Gursoy says that limited knowledge and understanding about the inner workings of AI forces consumers to fall back on emotional trust and make their own subjective judgments about the technology.

“One of the reasons why people are not willing to use AI devices or technologies is fear of the unknown,” he said. “Before ChatGPT was introduced, not many people had any idea about AI, but AI has been running in the background for years and it’s nothing new.”

Even before the chatbot ChatGPT burst into public consciousness in 2022, artificial intelligence was used in the technology behind familiar digital services, from your phone’s autocorrect to Netflix’s algorithm for recommending movies.

And the way AI is portrayed in pop culture isn’t helping boost trust in the technology either. Gursoy added that Hollywood science fiction films casting robots as villains had a bigger impact on shaping public perception towards AI than one might think.

“Way before people even heard about AI, those movies shaped people’s perception of what robots that run by AI can do to humanity,” he said.

...and a lack of transparency

Another part of the equation influencing customers is the perceived risk around AI – particularly with how it handles users’ personal data.

Concerns about how companies manage customers’ data have tamped down excitement around tools meant to streamline the user experience at a time when the government is still trying to find its footing on regulating AI.

“People have worries about privacy. They don’t know what’s going on in the background, the algorithms, how they run, that raises some concern,” said Gursoy.

This lack of transparency is something that Gursoy warns has the potential to sour customers’ perceptions towards brands they may have already come to trust. It is for this reason that he cautions companies against slapping on the “AI” tag as a buzzword without elaborating on its capabilities.

“The most advisable thing for them to do is come up with the right messaging,” he said. “Rather than simply putting ‘AI-powered’ or ‘run by AI,’ telling people how this can help them will ease the consumer’s fears.”

The findings suggest that the inclusion of 'AI' in product descriptions, such as 'AI-powered investment decision-making services,' can lower a customer's intention to use or buy the product, as consumers often have a higher standard for AI and can be wary due to a lack of transparency. Additionally, the study highlights that the role of emotional trust in AI usage is significant, as consumers' limited understanding of the technology can lead them to form subjective judgments.
