As more and more organizations develop and implement Artificial Intelligence (AI) or Machine Learning (ML) applications, questions about the reliability of their results are mounting. Some high-profile AI/ML lapses risk giving the technology a bad name, and the related media reports have made CIOs and senior management nervous.
Real-world examples that have undermined society’s confidence in AI/ML applications include:
- Risk assessment tools in the criminal justice system that amplify racial discrimination
- False arrests driven by facial recognition
- Insurmountable hurdles in accessing public services
- Unrecognized or uncorrected gender and racial biases
- Self-driving cars under test involved in traffic accidents
- The environmental cost of the giant server farms that power AI/ML applications
To avoid potentially thorny issues and headlines that damage the organization’s reputation, CIOs and senior management need a way to assess the design and performance of their AI/ML applications.
“Our members and other organizations have indicated that our standard has helped them incorporate responsible AI into their AI/ML applications,” says Keith Jansa, the Executive Director of the CIO Strategy Council (CIOSC).
CIOSC accreditation by Standards Council of Canada
The CIOSC is a not-for-profit corporation providing a forum for members to transform, shape and influence the Canadian information and technology ecosystem, and is a Standards Development Organization (SDO) accredited by the Standards Council of Canada (SCC).
“Our public and private sector members see value in our standards in part because of the strength of our process,” says Keith Jansa. “We provide a neutral forum for standards development work using a consensus-based process that brings together a range of stakeholders and is accredited by the SCC.”
The CIOSC accreditation signifies acceptance of the World Trade Organization (WTO) Technical Barriers to Trade (TBT) Annex 3 Code of Good Practice for the Preparation, Adoption and Application of Standards by Standardizing Bodies. That gives end-users assurance that the “Ethical design and use of automated decision systems” standard was developed using best practices.
CIO Strategy Council standard
To help organizations achieve a reasonable level of assurance that the risks associated with their AI/ML applications are being comprehensively managed, the CIOSC developed the standard titled “Ethical design and use of automated decision systems (CAN/CIOSC 101:2019).” The standard provides organizations with an auditable framework for protecting human values and incorporating ethics in the design and operation of automated decision-making systems.
The value of a professionally developed standard is that adopting it is much faster and cheaper than creating a comparable framework from scratch.
Values-based principles for responsible AI
The CIOSC standard provides a framework and process to help organizations apply the values-based principles of responsible AI. The Organization for Economic Co-operation and Development (OECD) describes an excellent example of responsible AI principles in its Recommendation of the Council on Artificial Intelligence. The principles are:
- Inclusive growth, sustainable development and well-being.
- Human-centred values and fairness.
- Transparency and explainability.
- Robustness, security and safety.
Being grounded in the responsible AI principles developed by the OECD lends the CIOSC standard credibility.
The framework of the CIOSC standard
The framework of the CIOSC standard focuses on managing the risks associated with AI/ML applications by encouraging designers and operators to address a list of detailed questions on the following topics related to automated decision systems:
- Risk management framework.
- Ethics by design.
- Monitoring and maintenance.
- Appeals and escalations of decisions rendered by the system.
The many detailed questions, developed by a CIOSC Technical Committee with diverse representation, provide end-users assurance that the standard is comprehensive.
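An organization adopting the standard might track its progress against these topic areas programmatically. The sketch below is purely illustrative: the class, method names, and pass/fail scoring are assumptions for demonstration, not part of the standard itself; only the topic names come from the list above.

```python
from dataclasses import dataclass, field

# Topic areas named in the article (illustrative subset of the standard).
TOPICS = [
    "Risk management framework",
    "Ethics by design",
    "Monitoring and maintenance",
    "Appeals and escalations of decisions rendered by the system",
]


@dataclass
class Assessment:
    """Hypothetical checklist: which topic areas an AI/ML application
    has addressed during a self-assessment."""
    answered: dict = field(default_factory=dict)

    def record(self, topic: str, satisfied: bool) -> None:
        # Reject topics outside the framework to catch typos early.
        if topic not in TOPICS:
            raise ValueError(f"Unknown topic: {topic}")
        self.answered[topic] = satisfied

    def gaps(self) -> list:
        # Topics not yet addressed, or addressed but not satisfied.
        return [t for t in TOPICS if not self.answered.get(t, False)]


assessment = Assessment()
assessment.record("Ethics by design", True)
assessment.record("Monitoring and maintenance", False)
print(assessment.gaps())
```

A report like `gaps()` gives management a quick view of where the application falls short of the framework before a formal audit.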