Canada, U.S. sign international guidelines for safe AI development

Eighteen countries, including Canada, the U.S. and the U.K., today agreed on recommended guidelines for developers in their nations covering the secure design, development, deployment, and operation of artificial intelligence systems.

It’s the latest in a series of voluntary guardrails that nations are urging their public and private sectors to follow for overseeing AI in the absence of legislation. Earlier this year, Ottawa and Washington announced similar guidelines for each of their countries.

The guidelines come as businesses release and adopt AI systems that can affect people’s lives, with no national legislation yet in place.

The latest document, Guidelines for Secure AI System Development, is aimed primarily at providers of AI systems, whether they are using models hosted by their own organization or relying on external application programming interfaces (APIs).

“We urge all stakeholders (including data scientists, developers, managers, decision-makers, and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems,” says the document’s introduction.

The guidelines follow a ‘secure by default’ approach, and are closely aligned with practices defined in the U.K. National Cyber Security Centre’s secure development and deployment guidance, the U.S. National Institute of Standards and Technology’s Secure Software Development Framework, and secure-by-design principles published by the U.S. Cybersecurity and Infrastructure Security Agency and other international cyber agencies.

They prioritize:
— taking ownership of security outcomes for customers;
— embracing radical transparency and accountability;
— and building organizational structures and leadership so that secure by design is a top business priority.

Briefly

— for secure design of AI projects, the guidelines say IT and corporate leaders should understand risks and threat modelling, as well as the specific topics and trade-offs to consider in system and model design;

— for secure development, organizations are advised to understand AI in the context of supply chain security, documentation, and asset and technical debt management;

— for secure deployment, there are recommendations covering the protection of infrastructure and models from compromise, threat, or loss, the development of incident management processes, and responsible release;

— for secure operation and maintenance of AI systems, there are recommendations covering actions such as logging and monitoring, update management, and information sharing.

Other countries endorsing these guidelines are Australia, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea and Singapore.

Meanwhile, in Canada, the House of Commons Industry Committee will resume hearings Tuesday on Bill C-27, which includes not only an overhaul of the existing federal privacy legislation, but also a new AI bill. So far, most of the witnesses have focused on the proposed Consumer Privacy Protection Act (CPPA). But several witnesses say the proposed Artificial Intelligence and Data Act (AIDA) deals with so many complex issues it should be split from C-27. Others argue the bill is good enough for the time being.

The government still hasn’t produced the full wording of amendments it’s willing to make to AIDA and CPPA to make the bills clearer.

AIDA will regulate what the government calls “high-impact systems,” such as AI systems that make decisions on loan applications or on an individual’s employment. The government says AIDA will make it clear that those developing a machine learning model intended for high-impact use have to ensure that appropriate data protection measures are taken before it goes on the market.

Also, the bill will clarify that developers of general-purpose AI systems like ChatGPT would have to establish measures to assess and mitigate risks of biased output before making the system live. Managers of general-purpose systems would have to monitor for any use of the system that could result in a risk of harm or biased output.

Meanwhile, the European Union is in the final stages of settling the wording of its AI Act, which would be the world’s first comprehensive AI law. News reports suggest the text could be finalized by February 2024, though there are still disagreements over how foundation models like ChatGPT should be regulated.


Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
