Skeptical about the model used for AI-derived recommendations? Here are the questions to ask

As a senior executive or CIO, how can you assure yourself that artificial intelligence (AI)- or machine learning (ML)-derived recommendations are reasonable and flow logically from the project work that has been performed?

While you want to be supportive and encouraging of your team’s work, you don’t want to be misled inadvertently, and you want to confirm that the data science team hasn’t misled itself.

“Machine learning algorithms, used to create the model, vary widely in capability and appropriateness for the problem space,” says Victor Lee, Head of Product Strategy and Developer Relations at TigerGraph, a leading vendor of graph database software. “Sometimes it’s a simple algorithm selection process. At other times, it’s a judgment call, based on application priorities such as accuracy or speed.”

Here are some high-level questions that you can ask the team about the model. They’re designed to raise everyone’s assurance that the AI/ML recommendations are sound and can be confidently implemented, even though you and everyone else know you’re not an AI/ML expert. Start with the question that concerns you most and that you’re most comfortable asking.

AI/ML model quality

AI/ML models vary widely in scope and complexity. The confidence you can have in AI/ML-derived recommendations is highly dependent on the adequacy of the model. Here are some related questions that will illuminate the AI/ML model quality:

  1. Is the model constructed for this project entirely new, or is it an adapted version of a previously successful model?
  2. If you sourced the algorithms used to create the model externally, how did you assure yourself that the quality of the algorithm software was adequate?
  3. How did you assure yourself that the project team members have sufficient AI/ML knowledge for the type of model chosen?
  4. How do we know that the model design is appropriate for the business situation we’re trying to model?
  5. How did you determine which algorithms to use to build your model?
  6. Of the algorithms that produced your model, which ones did you use from other sources, and which ones did you custom develop as a team?
  7. How did you determine that the algorithms are of adequate quality and contain few software defects?
  8. If you identified multiple algorithms, did you test more than one and compare the results for accuracy? (A simple sketch of such a comparison appears after this list.)
  9. How do we know that the model isn’t subtly (or, worse, significantly) modelling a different situation from the one we’re trying to model?
  10. What effort did you make to recognize bias in your model?
  11. What steps did you take to address the bias you likely identified in your model?
  12. Have you documented and reviewed a list of assumptions embedded in the model?
  13. Did any of the model outputs surprise you, either with the training data or with the real-world data?
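
To make question 8 concrete, here is a minimal sketch of the kind of evidence a team might show when asked about comparing algorithms, assuming a typical Python/scikit-learn workflow. The dataset and the two candidate models below are illustrative assumptions, not any particular team’s actual choices.

    # Illustrative only: cross-validate two candidate algorithms on the
    # same data and compare their accuracy, as question 8 suggests.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Stand-in dataset; a real project would use its own prepared data.
    X, y = load_breast_cancer(return_X_y=True)

    candidates = {
        "logistic_regression": LogisticRegression(max_iter=5000),
        "random_forest": RandomForestClassifier(random_state=42),
    }

    for name, model in candidates.items():
        # Five-fold cross-validation yields a mean accuracy and a spread,
        # which is more informative than a single train/test split.
        scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"{name}: mean accuracy {scores.mean():.3f} (std {scores.std():.3f})")

A team that can produce this kind of side-by-side comparison, and explain why the winning algorithm was chosen, has likely taken question 8 seriously.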

Evaluating answers

Here’s how to evaluate the answers you’ll receive from your data science team:

  1. If you receive blank stares, your question’s topic has not been addressed and needs more attention before the recommendations can be accepted. It may be necessary to add missing skills to the team or even to replace the entire team.
  2. If you receive a lengthy answer filled with data science jargon or techno-babble, the topic has not been sufficiently addressed, or worse, your team may be missing critical skills required to deliver confident recommendations. Your confidence in the recommendations should decrease or even evaporate.
  3. If you receive a thoughtful answer that references uncertainties and risks associated with the recommendations, your confidence in the work should increase.
  4. If you receive a response that describes potential unanticipated consequences, your confidence in the recommendations should increase.
  5. If the answers you receive are supported by additional slides containing relevant numbers and charts, your confidence in the team should increase significantly.
  6. If the project team acknowledges that your question’s topic should receive more attention, your confidence in the team should increase. It will likely be necessary to allocate more resources, such as external data science consultants, to address the deficiency.

For a summary discussion of the topics you should consider as you seek to assure yourself that AI/ML recommendations are sound, please read this article: Skeptical about AI-derived recommendations? Here are some tips to get you started.

What ideas can you contribute to help senior executives assure themselves that the AI/ML-derived recommendations are reasonable and flow logically from the project work performed? Let us know in the comments below.

Yogi Schulz (http://www.corvelle.com)
Yogi Schulz has over 40 years of Information Technology experience in various industries. Yogi works extensively in the petroleum industry to select and implement financial, production revenue accounting, land & contracts, and geotechnical systems. He manages projects that arise from changes in business requirements, from the need to leverage technology opportunities and from mergers. His specialties include IT strategy, web strategy, and systems project management.
