NuEnergy.ai, a Canadian tech company focused on the governance of artificial intelligence (AI), last week announced the launch of its hosted Machine Trust Platform (MTP), software designed to support ethical, transparent governance and measurement of AI deployments.
Developed in conjunction with the Royal Canadian Mounted Police (RCMP), the MTP gives organizations configurable one-stop access to qualified, globally sourced AI governance measurements and assessments to protect against drift in the four key trust parameters of AI applications: privacy, ethics, transparency, and bias, NuEnergy.ai chief executive officer Niraj Bhargava explained in a conversation with IT World Canada.
“You need to get the guard rails up early when you build a superhighway – you don’t wait for the first accident to decide you need guard rails on the highway. The guard rails may not be perfect to stop every accident from happening, but it’s a responsibility that we must fulfill. We know we have enough to start with those guard rails, but those guardrails need to continue to evolve. So we believe we do have techniques to measure bias and privacy concerns and explainability so that we can sleep well as we apply the AI,” said Bhargava.
The RCMP is the first testing department approved through Innovation, Science and Economic Development Canada (ISED) and the government of Canada’s Innovative Solutions Canada (ISC) program to test this research and development (R&D) innovation.
NuEnergy.ai says it will work with the RCMP to develop a framework for responsible AI governance and to test the initial MTP software. The implementation of a fully configured platform will follow an executive education program on AI Governance and an AI Governance Framework co-creation process.
Certain components of the platform are already commercially available and in use by 20 Canadian organizations. Bhargava explained that the modules of the subscription-based AI Governance software currently available on the platform let NuEnergy.ai clients create and monitor self-assessment scorecards for the AI Governance Benchmark and measure AI trustworthiness against a predefined index. They also let organizations measure the trustworthiness of machine learning algorithms on dimensions including privacy, bias, and explainability; integrate qualified governance tools and standards from global providers; and build and share dashboards that report against AI Governance standards and identify areas requiring improvement.
He said that applying some tools to specific dataset models is not yet available off the shelf, but this may be custom configured at a client’s request.
Bhargava says NuEnergy.ai is doing different things in different sectors of the economy, with the federal government being its primary market.
“There’s a high sensitivity to these topics of ethics of AI in the government, but we’re also doing work with the CIO Strategy Council, and with tech companies across Canada that are developing AI and want to do it ethically and showcase themselves and differentiate themselves. And we’re also working with boards of directors, imparting education around the topic, because it’s a fiduciary responsibility of corporations to make sure that they’re governing their machines and algorithms,” he said.
This highlights the government of Canada’s awareness that while its increased deployment of AI can be extremely beneficial, it must also be transparent about how that AI is used.