As more organizations develop and implement Artificial Intelligence (AI) and Machine Learning (ML) applications, questions about the reliability of the results are increasing. Some high-profile AI/ML lapses risk giving the technology a bad name, and media reports of these failures have created nervousness among CIOs and senior management.

Real-world examples that have undermined society’s confidence in AI/ML applications include:

  • Risk assessment tools in the criminal justice system that amplify racial discrimination
  • False arrests driven by facial recognition errors
  • Insurmountable hurdles in accessing public services
  • Unrecognized or uncorrected gender and racial biases
  • Self-driving car tests that resulted in traffic accidents
  • The environmental costs of the giant server farms that power AI/ML applications

To avoid potentially thorny issues and headlines that damage the organization’s reputation, CIOs and senior management need a way to assess the design and performance of their AI/ML applications.

“Our members and other organizations have indicated that our standard has helped them incorporate responsible AI into their AI/ML applications,” says Keith Jansa, Executive Director of the CIO Strategy Council (CIOSC).

CIOSC accreditation by the Standards Council of Canada

The CIOSC is a not-for-profit corporation providing a forum for members to transform, shape and influence the Canadian information and technology ecosystem, and is a Standards Development Organization (SDO) accredited by the Standards Council of Canada (SCC).

“Our public and private sector members see value in our standards in part because of the strength of our process,” says Keith Jansa. “We provide a neutral forum for standards development work using a consensus-based process that brings together a range of stakeholders and is accredited by the SCC.”

The CIOSC accreditation signifies adherence to the World Trade Organization (WTO) Technical Barriers to Trade (TBT) Annex 3 Code of Good Practice for the Preparation, Adoption and Application of Standards by Standardizing Bodies. That gives end-users assurance that the “Ethical design and use of automated decision systems” standard was developed using best practices.

CIO Strategy Council standard

To help organizations achieve a reasonable level of assurance that the risks associated with their AI/ML applications are being comprehensively managed, the CIOSC developed the standard titled “Ethical design and use of automated decision systems (CAN/CIOSC 101:2019).” The standard provides organizations with an auditable framework for protecting human values and incorporating ethics in the design and operation of automated decision-making systems.

The value of a professionally developed standard is that adopting it is much faster and cheaper than creating your own framework.

Values-based principles for responsible AI

The CIOSC standard provides a framework and process to help organizations apply the values-based principles of responsible AI. The Organization for Economic Co-operation and Development (OECD) describes an excellent example of responsible AI principles in its Recommendation of the Council on Artificial Intelligence. The principles are:

  1. Inclusive growth, sustainable development and well-being.
  2. Human-centred values and fairness.
  3. Transparency and explainability.
  4. Robustness, security and safety.
  5. Accountability.

Being grounded in the responsible AI principles developed by the OECD lends the CIOSC standard credibility.

The framework of the CIOSC standard

The framework of the CIOSC standard focuses on managing the risks associated with AI/ML applications by encouraging designers and operators to address detailed questions across the following five topics related to automated decision systems:

  1. Risk management framework.
  2. Ethics by design.
  3. Deployment.
  4. Monitoring and maintenance.
  5. Appeals and escalations of decisions rendered by the system.

The many detailed questions, developed by a CIOSC Technical Committee with diverse representation, provide end-users assurance that the standard is comprehensive.
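The standard itself is a written framework, not software, but an organization working through its questions will typically want to track which ones have been addressed for each AI/ML application. The sketch below is purely illustrative: the five topic names come from the list above, while the data structure, helper function, and sample questions are our own invention and are not part of CAN/CIOSC 101:2019.

```python
from dataclasses import dataclass

# The five topic areas of the CAN/CIOSC 101:2019 framework, as listed above.
TOPICS = [
    "Risk management framework",
    "Ethics by design",
    "Deployment",
    "Monitoring and maintenance",
    "Appeals and escalations",
]

@dataclass
class AssessmentItem:
    topic: str        # one of TOPICS
    question: str     # a question from the standard (placeholder text here)
    addressed: bool = False

def coverage_by_topic(items):
    """Return the fraction of questions addressed for each topic present."""
    totals, done = {}, {}
    for item in items:
        totals[item.topic] = totals.get(item.topic, 0) + 1
        if item.addressed:
            done[item.topic] = done.get(item.topic, 0) + 1
    return {topic: done.get(topic, 0) / totals[topic] for topic in totals}

# Example: a partially completed assessment for one topic.
items = [
    AssessmentItem("Deployment", "Was a rollback plan defined?", True),
    AssessmentItem("Deployment", "Were affected end-users notified?", False),
]
print(coverage_by_topic(items))  # {'Deployment': 0.5}
```

A checklist like this makes gaps visible per topic, which mirrors how an auditable framework is meant to be used: incomplete coverage in any of the five areas is a flag for management attention.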

Yogi Schulz
Yogi Schulz has over 40 years of Information Technology experience in various industries. Yogi works extensively in the petroleum industry to select and implement financial, production revenue accounting, land & contracts, and geotechnical systems. He manages projects that arise from changes in business requirements, from the need to leverage technology opportunities and from mergers. His specialties include IT strategy, web strategy, and systems project management.