Innovation Minister François-Philippe Champagne this morning announced a new voluntary code of conduct setting out measures for the responsible development and management of advanced generative AI systems.
He made the announcement at ALL IN, a two-day conference in Montreal organized by Scale AI that is convening industry heavyweights from over 20 countries to discuss Canadian AI.
“Generative AI breakthroughs have important impacts for society,” Champagne said. “We’re at the point where we must take action. Clear frameworks are necessary to make sure that we’re building trust.”
The code outlines measures around the following principles:
- Accountability – Implement a clear risk management framework proportionate to the scale and impact of activities. Share information on best risk management practices and employ multiple lines of defense, including third-party audits.
- Safety – Perform impact assessments and take steps to mitigate risks, including malicious or inappropriate uses.
- Fairness and Equity – Test systems for biases throughout their lifecycle and implement diverse training methods.
- Transparency – Publish information on the capabilities and limitations of AI systems, develop methods to identify AI-generated output, disclose the types of training data used, and ensure that systems that could be mistaken for humans are clearly identified as AI.
- Human oversight – Ensure systems are monitored and that incidents are reported and acted on.
- Validity and Robustness – Conduct testing, red teaming, and benchmarking against recognized standards to ensure systems operate effectively and are secured against attacks.
These measures, Innovation, Science and Economic Development Canada (ISED) said, will provide a critical bridge between now and when Bill C-27, which contains the government’s proposed Artificial Intelligence and Data Act (AIDA), comes into force.
Proposed over a year ago, Bill C-27 aims to promote the responsible design, development, and use of AI systems in Canada’s private sector, with a focus on high-impact systems affecting health, safety, and human rights.
The legislation has faced extensive scrutiny, including yesterday at the House of Commons Standing Committee on Industry and Technology, where critics urged the minister to address poorly defined language in AIDA, commit to more active consultation with stakeholders beyond industry insiders, and extend AI regulation to the public sector as well.
Champagne explained at ALL IN, “After meeting with experts, we realized that while we are developing a law here in Canada, it will take time,” adding, “if you ask people on the street, they want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products.”
Companies including Cohere, OpenText, Appen, BlackBerry, and others have signed on to the code of conduct.
In the coming days, the government will publish a summary of the feedback it received during the stakeholder consultations it held while developing the code of conduct.