Skeptical about the variables underlying the AI-derived recommendations?

As a senior executive or CIO, how can you assure yourself that artificial intelligence (AI)- or machine learning (ML)-derived recommendations are reasonable and flow logically from the project work that has been performed?

While you want to be supportive and encouraging of your team’s work, you don’t want to be misled inadvertently, and you want to confirm that the data science team hasn’t misled itself.

“The quality of the features or variables in your data has a major impact on the quality of the recommendations you can expect,” says Richard Henderson, EMEA Team Lead Solution Architect, at TigerGraph. “Our work in fraud detection again demonstrated the importance of data quality for variables used in our algorithms.”

Here are some high-level questions that you can ask the team about the AI/ML variables. They’re designed to build everyone’s assurance that the AI/ML recommendations are sound and can be confidently implemented, even though you and everyone else know you’re not a data science expert. Start with the question that concerns you most and that you’re most comfortable asking.

Confidence in the AI/ML variables

Some variables, or features, have a greater impact on AI/ML model outputs than others. The confidence you can place in AI/ML-derived recommendations depends heavily on the project team understanding which variables matter most. Here are some related questions that will illuminate the confidence in the AI/ML model variables; a brief sketch after the list shows how a team might demonstrate its answers to questions 1 and 5:

  1. Which variables is your model most sensitive to?
  2. How did you ensure that the data quality for the most critical variables is sufficient?
  3. How similar or different are the variable definitions in your model compared to equivalent variables we use to report our operations’ performance?
  4. To what extent did you revise the model as you gained more understanding about which variables are the most important?
  5. How did you avoid using variables that appear to be quite different but actually represent almost the same underlying feature, thereby introducing bias into your model?
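
For illustration only, here is a minimal Python sketch of the kind of analysis that would let a team answer questions 1 and 5 with evidence rather than opinion. The synthetic dataset, the model choice, the permutation-importance approach, and the 0.9 correlation threshold are all assumptions for the example, not a description of any particular team’s method.

```python
# Illustrative sketch: measure which variables a model is most sensitive to
# (question 1) and flag near-duplicate variables (question 5).
# All data, names, and thresholds here are assumptions for the example.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real project features.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           n_redundant=2, random_state=0)
features = [f"var_{i}" for i in range(X.shape[1])]
X = pd.DataFrame(X, columns=features)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Question 1: permutation importance on held-out data shows which
# variables the model is most sensitive to.
imp = permutation_importance(model, X_test, y_test, n_repeats=10,
                             random_state=0)
for name, mean in sorted(zip(features, imp.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name}: importance = {mean:.3f}")

# Question 5: a simple correlation screen flags pairs of variables that
# look different but carry nearly the same information.
corr = X.corr().abs()
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        if corr.iloc[i, j] > 0.9:
            print(f"Near-duplicate pair: {features[i]} / {features[j]} "
                  f"(|r| = {corr.iloc[i, j]:.2f})")
```

In a review meeting, the team could show the resulting ranking of variables and any flagged near-duplicate pairs alongside its data-quality checks, which makes the answers to these questions concrete rather than anecdotal.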

Evaluating answers

Here’s how to evaluate the answers you’ll receive from your data science team to these questions:

  1. If you receive blank stares, your question’s topic has not been addressed and needs more attention before the recommendations are accepted. It may be necessary to add missing skills to the team or even to replace the team entirely.
  2. If you receive a lengthy answer filled with a lot of data science jargon or techno-babble, the topic has not been sufficiently addressed, or worse, your team may be missing critical skills required to deliver confident recommendations. Your confidence in the recommendations should decrease or even evaporate.
  3. If you receive a thoughtful answer that references uncertainties and risks associated with the recommendations, your confidence in the work should increase.
  4. If you receive a response that describes potential unanticipated consequences, your confidence in the recommendations should increase.
  5. If the answers you receive are supported by additional slides containing relevant numbers and charts, your confidence in the team should increase significantly.
  6. If the project team acknowledges that your question’s topic should receive more attention, your confidence in the team should increase. It will likely be necessary to allocate more resources, such as external data science consultants, to address the deficiency.

For a summary discussion of the topics you should consider as you seek to assure yourself that AI/ML recommendations are sound, please read this article: Skeptical about AI-derived recommendations? Here are some tips to get you started.


What ideas can you contribute to help senior executives assure themselves that the AI/ML-derived recommendations are reasonable and flow logically from the project work performed? Let us know in the comments below.

Yogi Schulz (http://www.corvelle.com)
Yogi Schulz has over 40 years of Information Technology experience in various industries. Yogi works extensively in the petroleum industry to select and implement financial, production revenue accounting, land & contracts, and geotechnical systems. He manages projects that arise from changes in business requirements, from the need to leverage technology opportunities and from mergers. His specialties include IT strategy, web strategy, and systems project management.
