
Canadian privacy czars release principles for responsible development of AI

On the heels of cybersecurity guidance for generative AI systems issued by the federal government, Canada’s federal, provincial, and territorial privacy regulators have issued their own set of privacy-related principles to be followed.

Announced Thursday, the principles are aimed at advancing the responsible, trustworthy and privacy-protective development and use of generative artificial intelligence (AI) technologies in this country.

While Parliament is debating the proposed Artificial Intelligence and Data Act (AIDA), which would put mandatory rules around high-risk AI systems, the law likely won’t come into effect for several years. Governments and regulators hope that, in the meantime, the guidelines will give application developers, businesses, and government departments some idea of how far they should, or shouldn’t, go.

And though laws regulating AI aren’t yet on the books, the regulators note that organizations developing, providing, or using generative AI must still comply with existing privacy laws and regulations in Canada.

Also on Thursday, the government announced that eight more companies have signed on to its voluntary AI Code of Conduct. They include AltaML, which helps firms with AI solutions; BlueDot, which uses AI to track infectious diseases; solutions provider CGI; Kama.ai, which uses AI to build marketing and customer relationship applications; IBM; Protexxa, which offers a SaaS cybersecurity platform; Resemble AI, which lets organizations create human-like voices for answering queries in call centres; and Scale AI, which helps firms create AI models. The voluntary code identifies measures that organizations are encouraged to apply when developing and managing advanced generative AI systems.

Federal Privacy Commissioner Philippe Dufresne announced the new principles document on Thursday at the opening of an international Privacy and Generative AI Symposium organized by his office.

The document lays out how key privacy principles apply when developing, providing, or using generative AI models, tools, products, and services.

Developers are also urged to consider the unique impact these tools could have on vulnerable groups, including children.

The document provides examples of best practices, including building “privacy by design” into the development of the tools, and labelling content created by generative AI.
