Canadians' use of GenAI could create risks for employers, KPMG survey reveals

One in five Canadians is using generative artificial intelligence (GenAI) tools to help them with their work or studies, according to a new study by KPMG in Canada released Tuesday at Collision 2023 in Toronto. However, not all is rosy.

Findings revealed that while many users are seeing a boost in productivity, there is a need for strong organizational controls and policies, as some users are entering sensitive data into their prompts, not verifying results, and claiming AI content as their own.

With the increased capabilities provided by generative AI tools, KPMG created Canada’s first-ever Generative AI Adoption Index, which measures how, when, and why Canadians are using generative AI tools in the workplace and at school, in order to analyze the risks and benefits to organizations and society.

The inaugural Canadian Generative AI Index is 11.9, highlighting that the technology has become part of workers’ and students’ lives in short order, but the overall penetration rate remains relatively low. A score of 100 indicates mass adoption.

Zoe Willis, national leader in data, digital and analytics and partner in the firm’s GenAI practice, said “it is absolutely critical for organizations to have clear processes and controls to prevent employees from entering sensitive information into generative AI prompts or relying on false or misleading material generated by AI.

“This starts with clearly defined policies that educate your employees on the use of these tools. Organizational guardrails are essential to ensure compliance with privacy laws, client agreements and professional standards.”

The survey of 5,140 Canadians found that 1,052, or 20 per cent, have used GenAI to help them with their work or their studies. According to a release, the most “common uses include research, generating ideas, writing essays, and creating presentations. Respondents say the use of the technology has enhanced productivity and quality, created revenue and increased grades but, in the process, they are engaging in potentially risky behaviour that could create risks for their employers.”

Other findings revealed that:

  • Among GenAI users, nearly one-quarter (23 per cent) of working professionals said they are entering information about their employer (including its name) into prompts, and some are even putting private financial data (10 per cent) or other proprietary information such as human resources or supply chain data (15 per cent) into their prompts.
  • When it comes to checking the accuracy of content generated by AI platforms, just under half (49 per cent) of users said they check every time, while 46 per cent check sometimes. Generative AI platforms have been known to produce content that is misleading or inaccurate, often known as “hallucinations.”

“Data is an organization’s most valuable asset, and it needs to be safeguarded to prevent data leaks, privacy violations, and cybersecurity breaches, not to mention financial and reputational damage,” said Willis.

“Organizations might need to look at creating proprietary generative AI models with safeguarded access to their own data – that’s a critical step to reducing the risk of sensitive data leaking out into the world and getting into the hands of potentially malicious actors.”

Of the 20 per cent of Canadians who use GenAI, 18 per cent use it daily or for every task, 34 per cent use it a few times per week, and 26 per cent a few times per month.

Just over half of users said generative AI tools save them up to five hours per week, and two-thirds (67 per cent) said the time saved by using generative AI tools has allowed them to take on additional work that they otherwise would not have had the capacity for. Nearly two-thirds (65 per cent) said using generative AI is essential to address their workloads.

“From here on, the most productive employees will be the ones empowered by generative AI technology,” Willis said. “Highly skilled employees with institutional knowledge of their organizations will be especially well-positioned to leverage (it) because they have the context that helps them create more effective prompts, which in turn yields more relevant results, giving them the power to make better decisions in a fraction of the time.

“Organizations that embrace generative AI and arm their people with it responsibly will see major gains in workforce productivity.”

In an interview on Monday with IT World Canada prior to the opening of Collision, she said the firm “really wanted to get a perspective around the thinking in Canada to (determine) what is really happening and whether people are leveraging this technology. As you know, there has been a lot of noise and a lot of take-up all over the world around generative AI more broadly. We really wanted to do a bit of a pulse to see how people are really using it and whether they are adopting it. There are very mixed results in the market.

“For me, it is all about the data – the more data you can give me, the happier it makes me because it means that we have much more insight. We can then actually provide advice to our employees, our clients, to the industry, to the markets, in a much better, more informed way.”

Ven Adamov, co-leader of KPMG in Canada’s Responsible AI framework and a partner in KPMG’s Generative AI practice, said in the release that the “lack of transparency underscores the need for organizations to implement strong frameworks, controls, processes, and tools to ensure AI systems are being used in a trustworthy manner.

“Generative AI tools are potentially transformative for employee productivity, but the reality is employees do not always use them responsibly. Implementing a responsible AI framework – which includes both policies and tools that identify and mitigate risks with AI output – can help protect against misuse of this powerful technology.”

Paul Barker
Paul Barker is the founder of PBC Communications, an independent writing firm that specializes in freelance journalism. His work has appeared in a number of technology magazines and online with the subject matter ranging from cybersecurity issues and the evolving world of edge computing to information management and artificial intelligence advances.
