A Canadian who is one of the world’s leading thinkers on artificial intelligence says Canada’s proposed AI law needs to be passed as soon as possible.
“We urgently need agile AI legislation, and I think this law is moving in the right direction,” Yoshua Bengio, scientific director of Mila, Quebec’s artificial intelligence institute, told the House of Commons industry committee studying the proposed Artificial Intelligence and Data Act (AIDA) on Monday.
While other experts have told the committee that AIDA should be withdrawn for a complete redrafting or until more public consultation has been held, Bengio urged Parliament to pass AIDA soon — although with some amendments.
“An imperfect law with regulations to be adopted later is better than no law, or postponing it,” he said.
In fact, Bengio added, oversight of AI is so critical that some rules should take effect as soon as AIDA is signed, rather than waiting the expected two years or so while the government writes regulations detailing how parts of the bill will work.
For example, he said, as soon as the bill is signed into law, businesses would have to register AI systems above a set capability threshold, providing information about each system’s safety, security measures, and security assessments. The AI regulator, at the moment proposed to be an official of the Innovation ministry, would be able to use that information to create best-in-class requirements for future permits to continue developing and deploying advanced systems.
“This would put the burden of demonstrating safety on developers with the billions required to build these advanced systems, rather than on taxpayers,” Bengio said.
Computing systems as smart as or smarter than humans (what he called “superhuman AI”), with the capacity for what experts call artificial general intelligence (AGI), may arrive within two decades and “possibly in the next few years,” he said. But, he added, society isn’t ready.
“The current AI trajectory poses serious risk of major societal harms even before AGI is reached,” he said.
While progress in AI has opened what he called “exciting opportunities for numerous beneficial applications,” he noted, “it is urgent to establish the necessary guardrails to foster innovation while mitigating risks and harms.”
Briefly, AIDA would oversee classes of “high-impact” AI systems — such as those covering employment, providing services to an individual, processing biometric information for identification, moderating content on a search engine or social media platform, or being used by police. It would be illegal to deploy an AI system likely to cause serious physical, psychological or economic harm to an individual. Persons responsible for high-impact systems would have to establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the AI system.
In addition to amending the legislation so the registry would take effect immediately, before regulations are set, Bengio said there should be two other changes:
— the definition of a high-impact AI system should include “national security risks and societal threats,”
— and an AI developer should be required to demonstrate a system’s safety and security before it is fully trained and deployed. “We need to identify risks early in an AI lifecycle,” he explained.