Canadians very trusting of generative AI despite threats to consumer safety and security: Capgemini

Six months ago, OpenAI’s ChatGPT captured the interest of experts and amateurs around the globe, both for its seemingly endless, fast-growing capabilities and for its many peculiar faux pas.

Canadians, however, remain highly enthusiastic about the host of generative AI tools seeing the light of day at a breathtaking pace.

 A new Capgemini study that surveyed 10,000 consumers over the age of 18 across 13 countries, including 800 in Canada, found that Canadians are very trusting and aware of generative AI tools.

Over 70 per cent of Canadians say they trust content written by generative AI, and 67 per cent are even open to purchasing products or services the technology recommends to them.

Consumers globally are also excited about the prospect of using AI to edit content based on prompts, generate content, obtain general information, brainstorm, translate languages and more. The most excited countries are Australia (67 per cent), Canada (66 per cent), and Singapore (66 per cent).

The report suggests that this enthusiasm may be due to the efficiency these applications bring, notably the availability of personalized content in ready-to-use formats. Endorsements from tech firms also increase consumer trust levels, the report noted.

“Capgemini is excited about the potential of Generative AI,” said Steven Karan, vice president and head of insights at Capgemini. “We believe that the digital revolution, just like the industrial revolution, has the potential to assist humans and improve their productivity, enabling them to focus on more rewarding, ‘intelligent’ tasks.”

Canadians, however, are taking it a step further, even seeking financial, medical and relationship advice from generative AI tools.

Sixty-nine per cent of Canadians, for instance, believe medical opinions from ChatGPT would be helpful, while 66 per cent would seek advice from generative AI on personal interactions, relationships, career, or life plans.

A recent U.S. study compared responses to health questions from physicians and ChatGPT. A panel of licensed healthcare professionals, unaware of the source of the response, preferred ChatGPT’s responses 79 per cent of the time; they were rated as higher-quality and more empathetic than those from physicians.

The Capgemini report attributes such high levels of trust to the clear responses generated by tools such as ChatGPT, which consumers might equate with accuracy. 

Interestingly, the report shows that, globally, trust levels rise as household incomes increase.

To that point, Karan questions whether there is a correlation between trust and access to the technology, adding, “Gen AI companies and governments alike need to ensure that as the technology matures, no inherent roadblocks exist that would prevent consumers across different income levels from benefiting.”

But low public awareness of risks is also a clear driver of these high levels of trust, the report notes. Plus, so far, there is only a limited number of known cases of AI interactions violating consumer trust.

The resulting lack of caution could leave consumers vulnerable to the dangers posed by these applications, notably fake news, deepfakes, cyberattacks and plagiarism.

Alarmingly, the study shows that consumers globally are even unconcerned about misinformation (49 per cent) and deepfakes (59 per cent) created by generative AI. Additionally, only about 30 per cent of consumers are concerned about the use of AI to commit phishing attacks. In general, millennials tend to be the least worried about these AI malpractices, while consumers in Sweden and Italy exhibit the highest levels of concern.

“Consumers must use the tools cautiously, verifying the information provided, and should not be afraid to seek help in high-risk situations,” said Karan. “Regulators must strengthen legislation to protect consumers by requesting disclosures, establishing accountability, and improving user control. Used in a monitored, controlled fashion, generative AI can be an exciting tool that brings with it untapped potential for all.”

A significant minority of consumers globally, in fact, say they are conscious of the potential for unethical and malicious use of AI. One in three, for instance, are worried about the non-recognition or non-payment of artists whose work is used to train generative AI algorithms. Earlier this year, a number of artists filed lawsuits against AI image generators Stability AI, Midjourney, and DeviantArt, alleging that these organizations trained their AI models on the artists’ images, scraped from the internet without their consent.

The tendency of AI models to “hallucinate” (make up completely false information) is also concerning, especially with the level of trust conferred on these applications. And as large language models get folded into various systems, this tendency to hallucinate can contribute to the overall degradation of information quality and erode trust in the accuracy of information, the report warns.

Karan affirmed that consumers should avoid blindly trusting the output of generative AI, as it is not completely reliable and the technology is still at an early stage of maturity. He added, “Like any other AI, Generative AI is not ‘intelligent’ in itself. The intelligence stems from the human experts whom these tools will assist and support. The key to success, as with any AI, is the safeguards that humans build around them to guarantee the quality of their output.”


