Skeptical about explainability of AI-derived recommendations?

As a senior executive or CIO, how can you assure yourself that Artificial Intelligence (AI) or Machine Learning (ML)-derived recommendations are reasonable and flow logically from the project work that has been performed?

You want to support and encourage your team’s work, but you don’t want to be unwittingly misled, and you want to confirm that the data science team has not misled itself.

“Organizations need explainable AI/ML results to build confidence in the technology and to minimize the risk of being misled by subtle biases or catastrophic recommendations,” says Amy Hodler, AI Technical Evangelist at Fiddler AI in Palo Alto, California. “Explainability applies to both design and operation of AI/ML applications.”

Here are some high-level questions that you can ask the team about explainability. They’re designed to raise everyone’s assurance that the AI/ML recommendations are sound and can be confidently implemented, even though you, and everyone else, know that you’re not an AI expert. Start by selecting the one question that concerns you most and that you’re most comfortable asking.


Explainable artificial intelligence (XAI) is a set of processes and methods that allows human end-users to comprehend and trust the results and output created by machine learning algorithms. The confidence you can have in AI/ML-derived recommendations depends on the project design, including its explainability features.
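To make the idea concrete, one widely used explainability technique is permutation importance: shuffle one input feature and measure how much the model’s accuracy drops. A large drop means the model genuinely relies on that feature. The sketch below is purely illustrative; the toy model, data, and function names are assumptions, not anything from this article’s project.

```python
import random

def permutation_importance(model, X, y, feature_idx, trials=20, seed=0):
    """Estimate a feature's importance as the average accuracy drop
    observed when that feature's column is randomly shuffled."""
    rng = random.Random(seed)
    accuracy = lambda rows: sum(model(r) == label for r, label in zip(rows, y)) / len(y)
    baseline = accuracy(X)
    drops = []
    for _ in range(trials):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        # Rebuild the dataset with only this one column shuffled.
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, column)]
        drops.append(baseline - accuracy(shuffled))
    return sum(drops) / trials

# Toy model: predicts 1 when feature 0 exceeds a threshold; feature 1 is noise.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.1, 9], [0.2, 3], [0.8, 7], [0.9, 1]]
y = [0, 0, 1, 1]
```

Shuffling feature 0 hurts accuracy, so its importance score is positive; shuffling the noise feature changes nothing, so its score is zero. An executive doesn’t need to run this code, but a team that can show a chart of such scores is demonstrating explainability rather than asserting it.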

Here are some related questions that will illuminate potential issues in your project’s explainability of results:

  1. What steps did you take to enhance trust and confidence in the model’s results?
  2. How would you characterize your model’s accuracy, explainability, and transparency?
  3. Have you performed a model risk assessment?
  4. To what extent did the model design incorporate traceability?
  5. Does the AI/ML application you want to deploy trigger an alert when the model deviates from the expected results?
  6. What is your strategy for monitoring the AI/ML application to ensure that it continues to deliver the expected results?
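The deviation alert and ongoing monitoring raised in the last two questions can be surprisingly simple in concept. As a minimal sketch, assuming a hypothetical monitor with an invented baseline and tolerance (none of these names or thresholds come from the article), the application tracks a rolling average of model outputs and flags when it drifts from expectations:

```python
from collections import deque

class DriftMonitor:
    """Illustrative monitor: alerts when the rolling mean of recent
    model outputs drifts too far from a baseline expectation."""

    def __init__(self, baseline_mean, tolerance, window=100):
        self.baseline_mean = baseline_mean   # expected average output
        self.tolerance = tolerance           # allowed absolute deviation
        self.recent = deque(maxlen=window)   # sliding window of outputs

    def observe(self, prediction):
        """Record one prediction; return True if an alert should fire."""
        self.recent.append(prediction)
        rolling_mean = sum(self.recent) / len(self.recent)
        return abs(rolling_mean - self.baseline_mean) > self.tolerance

# Example: a model expected to average 0.5 starts producing high scores.
monitor = DriftMonitor(baseline_mean=0.5, tolerance=0.2, window=5)
alerts = [monitor.observe(p) for p in [0.5, 0.55, 0.9, 0.95, 0.9]]
```

Production monitoring is richer than this, of course, but if the team cannot describe even this level of deviation checking, the monitoring question has not been addressed.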

Evaluate answers

Here’s how to evaluate the answers you’ll receive from your data science team:

  1. If you receive blank stares, the topic of your question has not been addressed and needs more attention before the recommendations are adopted. It will be necessary to add the missing skills to the team, or even to replace the team.
  2. If you receive a lengthy answer filled with data science jargon or techno-chatter, the topic has been addressed insufficiently or not at all. Your team may lack the critical skills required to deliver trustworthy recommendations, and your confidence in the recommendations should decrease or even disappear.
  3. Your confidence in the work should increase if you receive a thoughtful response that points to uncertainties and risks associated with the recommendations.
  4. If you receive a response that describes potential unanticipated consequences, your confidence in the recommendations should increase.
  5. If additional slides support the answers you receive with relevant figures and charts, your confidence in the team should increase significantly.
  6. If the project team acknowledges that the topic of your question should receive more attention, your confidence in the team should increase. To remedy the shortfall, it will probably be necessary to allocate more resources, such as external data science consultants.

For a summary discussion of the topics you should consider as you seek to assure yourself that AI/ML recommendations are reasonable, please read this article: Skeptical about AI-derived recommendations? Here are some tips to get you started.

Now ask yourself: what ideas can you contribute to help senior executives assure themselves that AI/ML-derived recommendations are reasonable and flow logically from the project work performed?

Jim Love, Chief Content Officer, IT World Canada
Yogi Schulz
Yogi Schulz has over 40 years of Information Technology experience in various industries. Yogi works extensively in the petroleum industry to select and implement financial, production revenue accounting, land & contracts, and geotechnical systems. He manages projects that arise from changes in business requirements, from the need to leverage technology opportunities and from mergers. His specialties include IT strategy, web strategy, and systems project management.