Building trustworthy AI

It has been almost five years since sensationalized headlines first started capturing our attention, warning us of the impending overhaul of modern-day society at the hands of artificial intelligence (AI). However, while we are not yet shuttled around by self-driving cars, the reality is that automated decision-making systems are used extensively across a wide variety of domains, from loan processing and employment hiring to customer service engagement and product quality control.

While global businesses are acutely aware of the importance of trustworthy AI, the consensus is that the maturation of AI has been slower than expected due to the challenges encountered in commercializing it, including a shortage of skilled talent, the demands of privacy-protective data acquisition and cleansing and, perhaps most importantly, the need to build broader societal trust in the technology. Trustworthy and explainable AI is critical to business: the Global AI Adoption Index 2021 reports that 91 per cent of businesses using AI say their ability to explain how it arrived at a decision is critical.

The Canadian AI ecosystem in particular has long advocated that trust in AI is essential. Canada boasts one of the most respected research communities in the world, credited with many of the discoveries and scientific innovations that have led to revolutionary advancements in AI technologies. However, Canadian businesses have been particularly slow to implement and commercialize AI solutions, out of an abundance of ‘classic Canadian caution’ underpinned by a lack of trust in AI.

Trust is the bedrock of AI adoption: people must be able to trust both the process and the results of these systems. A 2019 AI expert roundtable, for example, highlighted explainability, bias, and diversity as focus areas for financial institutions as they adopt and develop best practices on the responsible use of AI. Similarly, a recent survey from the IBM Institute for Business Value finds that AI explainability is increasingly important to business leaders and policymakers.

In tabling Bill C-11, the Digital Charter Implementation Act, in the previous session of Parliament, the federal government acknowledged the vital role policy and regulation must play. While the bulk of the bill consisted of a legislative regime governing the collection, use, and disclosure of personal information for commercial activity in Canada, for the first time it also included provisions for the regulation of AI and automated decision-making systems.

As stated in the Digital Charter Implementation Act, 2020: “Businesses would have to be transparent about how they use such systems to make significant predictions, recommendations or decisions about individuals. Individuals would also have the right to request that businesses explain how a prediction, recommendation or decision was made by an automated decision-making system and explain how the information was obtained.”

Explainability, fairness, transparency, robustness and privacy are the pillars of a foundation upon which trusted and responsible AI can be built. 

As Canadian organizations struggle to commercialize AI systems in hopes of reaping the promised benefits, it will be important to contemplate how to build out such capabilities. In pursuit of those goals, organizations should take the following into consideration.

One explanation does not fit all

Beyond the technical challenges of explaining black box AI systems, explanations will differ depending on the persona in question. For example, the data scientist striving to improve model accuracy requires different explainability metrics than the loan officer explaining to a client why a loan application was denied, or the regulator who has to prove that a system does not discriminate.
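To make the contrast concrete, here is a minimal sketch, assuming a hypothetical loan-approval model trained on synthetic data; the feature names, data, and weights are illustrative only, not drawn from any real lending system. The same fitted model supports a global view for the data scientist and a per-applicant view for the loan officer.

```python
# Minimal sketch: persona-specific views of one hypothetical loan model.
# All feature names, data, and weights are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years"]
X = rng.normal(size=(500, 3))
# Synthetic labels: approval more likely with income and history, less with debt.
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Data-scientist view: global model behaviour via the learned coefficients.
for name, coef in zip(features, model.coef_[0]):
    print(f"global weight for {name}: {coef:+.2f}")

# Loan-officer view: per-feature contributions for one specific applicant,
# i.e. which factors pushed this particular decision up or down.
applicant = X[0]
for name, c in sorted(zip(features, model.coef_[0] * applicant), key=lambda t: t[1]):
    print(f"{name} contributed {c:+.2f} to this applicant's score")
```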

There are different approaches

A variety of techniques can be leveraged to explain machine learning models. One approach considers directly interpretable models, such as decision trees, Boolean rule sets, and generalized additive models, that are inherently understandable; post hoc techniques, by contrast, first train a black box model and then build a separate explanation model on top of it. Another distinction is between global and local explanations: the former describe a model’s overall behaviour, while the latter explain the prediction for a single sample point.
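The sketch below, on synthetic data with illustrative names, shows both halves of the first distinction: a directly interpretable decision tree whose rules can be read off as-is, and a post hoc global surrogate, a simple tree fitted to a black box model’s predictions to approximate its overall behaviour. (A local technique would instead explain a single prediction rather than the whole model.)

```python
# Minimal sketch: directly interpretable model vs. post hoc global surrogate.
# Data and feature names are synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
names = [f"f{i}" for i in range(5)]

# Directly interpretable: a shallow tree whose decision rules are the explanation.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(interpretable, feature_names=names))

# Post hoc: first train a black box model, then fit a separate, simpler
# explanation model (a surrogate tree) to the black box's predictions.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    X, black_box.predict(X)
)
print(export_text(surrogate, feature_names=names))
```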

The role of transparency

Users want to understand how a service works, its functionality, and its strengths and limitations. Transparent AI systems earn trust when they disclose what data was collected for training, how it will be used and stored, and who has access to it, and when they make their purpose clear to users.
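One way to put this into practice is to publish machine-readable transparency metadata alongside a deployed model. The sketch below is a hypothetical, minimal factsheet in the spirit of model cards; its fields and values are assumptions for illustration, not a standard schema.

```python
# Minimal sketch: transparency metadata published alongside a model.
# The schema and all values here are hypothetical and illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelFactsheet:
    purpose: str              # what the system is for, stated to users
    training_data: str        # what data was collected for training
    storage_and_use: str      # how the data is used and stored
    data_access: list[str]    # who has access to the data
    limitations: list[str] = field(default_factory=list)

factsheet = ModelFactsheet(
    purpose="Rank incoming support tickets by urgency",
    training_data="Anonymized tickets, 2018-2021, collected with consent",
    storage_and_use="Raw text deleted after 90 days; features retained 2 years",
    data_access=["ml-platform-team", "internal-audit"],
    limitations=["English-only; accuracy degrades on very short tickets"],
)
print(factsheet)
```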

It is no longer sufficient to strive only for accuracy when building AI systems. The foundational pillars of trustworthy AI must be interwoven into the system design from the beginning and not bolted on as afterthoughts. It is essential that organizations give careful consideration to the platforms upon which they build and deploy algorithms and autonomous decision-making systems, to ensure that all aspects of the AI pipeline are supported. Only then will we see truly life-changing AI systems, built and deployed with confidence, in which we can place sufficient trust.

Marija Mijalkovic
Marija Mijalkovic is the AI Solutions Technical Leader at IBM Canada. She has over 25 years of client experience, building and developing trusted advisor relationships with Canada’s largest enterprise and public sector organizations. A disruptive innovation evangelist, Marija is committed to questioning the status quo and helping clients envision and execute on disruptive technology strategies, like heterogeneous computing, AI, and quantum computing, that deliver measurable value and drive business results. She is a respected and recognized technology thought leader, AI/machine/deep learning champion, and community leader. Marija is a mentor to IBM women in technical sales and the IBM Summit Program (new hires), as well as mentees external to IBM. She is also the Alumni Representative for the Community Affairs and Gender Issues Committee at the University of Toronto.
