How national standards can help define the future of AI

Institutions across the financial services sector have embraced AI and machine learning to reduce costs and risk. Applications ranging from automated client support and creditworthiness assessment to the detection and prevention of fraudulent credit card activity have boosted efficiency while also strengthening consumer protections.

Various levels of management face a host of decisions in designing these highly automated technologies and deploying them into a production environment. Because responsibility for risk management is typically shared across the “three lines of defence” – the business, risk management and internal audit – financial institutions are adapting their existing frameworks to accommodate automated decision-making systems.

Beyond regulated entities, the bodies that regulate and supervise the financial services sector are reviewing existing guidance and regulations to discern the extent to which they apply to this pervasive new technology. Within their own operations, financial regulatory bodies are also experimenting with how AI methods can support electronic filings and the review and analysis of local and international regulations.

Meeting compliance obligations through national standards

Organizations developing intelligent systems are adopting new governance frameworks whose conformity can be measured and tested, increasing consumer confidence in the applications of these systems.

In 2019, the CIO Strategy Council, an accredited standards body in Canada, published the first national standard for the ethical design and use of automated decision systems (CAN/CIOSC 101:2019). The standard provides organizations with auditable criteria for protecting human values and incorporating ethics into automated decision-making systems.

Solutions to enable compliance are entering the market

In 2020, the CIO Strategy Council and KPMG Canada worked together to launch the AI Ethics Assurance Program. The program gives organizations, along with their customers, clients and, in the case of governments, citizens, confidence that the organization’s controls meet the criteria set out in CAN/CIOSC 101:2019, Ethical design and use of automated decision systems.

Public- and private-sector organizations are applying the national standard in Canada and around the world. Organizations such as Prodago are leading its incorporation into data and AI governance management platform services that help organizations assess their operating practices for managing ethical AI risks.

Building capacity and AI learning pathways

Use of the national standard also extends to organizations’ alpha testing to build capacity, and financial services institutions can draw insights from across sector verticals to develop responsible AI solutions. Examples include the Ontario government’s reference to the national standard in its Transparency Guidelines for Data-Driven Technology in Government, and companies from NuEnergy.ai to Microsoft beginning to incorporate the standard into executive learning and education programs.

Build trust through standards.


Keith Jansa and Paul Childerhose
Keith Jansa is Executive Director of the CIO Strategy Council and Paul Childerhose is a board member of the Canadian RegTech Association and a 20-year veteran of the financial services industry.
