As a senior executive or CIO, how can you assure yourself that artificial intelligence (AI) or machine learning (ML)-derived recommendations are reasonable and flow logically from the project work that has been performed?
While you want to be supportive and encouraging of your team’s work, you don’t want to be inadvertently misled, and you want to confirm that the data science team hasn’t misled itself.
“Many organizations experience a communication disconnect between senior executives and the data scientists and software engineers working on AI/ML projects,” says Amy Hodler, Director, Graph Analytics and AI programs at Neo4j, a leading vendor of graph database software. “Data scientists like to describe the details of how their models function while management wants to discuss the level of confidence associated with the recommendations and business implications.”
Here are some high-level questions that you can ask the team. They’re designed to raise everyone’s assurance that the AI/ML recommendations are sound and can be confidently implemented, even though everyone knows you’re not an AI/ML expert. Start by selecting the one question that concerns you most and that you’re comfortable asking.
Data quality

The confidence you can have in AI/ML-derived recommendations is highly dependent on the data quality found in the data sources used by the project team. Here are some related questions that will illuminate the actual data quality:
- How do we know that the data sources you employed are sufficient in number to support the model comprehensively?
- How do we know that the quality and volume of the data you employed is sufficient to support the model comprehensively?
- How did you raise the data quality and volume sufficiently to ensure that the uncertainty associated with the model is small or modest?
- How did your domain experts determine that the data context is rich enough to support the model and avoid misunderstandings of data meanings?
- How did you determine that the training data for your model was rich enough?
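When the team answers these questions, they should be able to show the results of a basic data-quality audit. The sketch below is a hypothetical illustration, not any team’s actual tooling: it summarizes row count, duplicate rows, and per-field missing-value rates, and flags fields whose missing rate exceeds a chosen threshold (all names, fields, and thresholds here are invented for the example).

```python
# Illustrative data-quality audit: the kind of evidence a team might
# present when asked about missing values, duplicates, and volume.
# The dataset, field names, and threshold are hypothetical.

def audit_quality(records, missing_threshold=0.1):
    """Summarize row count, duplicate rows, and per-field missing rates."""
    fields = sorted({k for r in records for k in r})
    n = len(records)
    missing = {
        f: sum(1 for r in records if r.get(f) is None) / n for f in fields
    }
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(r.get(f) for f in fields)
        duplicates += key in seen
        seen.add(key)
    flagged = [f for f, rate in missing.items() if rate > missing_threshold]
    return {"rows": n, "duplicates": duplicates,
            "missing_rates": missing, "flagged_fields": flagged}

# Toy example: one duplicate row and a field with 50% missing values.
data = [
    {"customer_id": 1, "region": "EMEA", "churn_score": 0.2},
    {"customer_id": 2, "region": "APAC", "churn_score": None},
    {"customer_id": 2, "region": "APAC", "churn_score": None},
    {"customer_id": 3, "region": None, "churn_score": 0.7},
]
report = audit_quality(data, missing_threshold=0.4)
print(report["rows"], report["duplicates"], report["flagged_fields"])
# → 4 1 ['churn_score']
```

A report like this gives a non-specialist something concrete to question: are four rows enough, is one duplicate acceptable, and why is half of a key field missing?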
AI/ML model quality
AI/ML models vary widely in scope and complexity. The confidence you can have in AI/ML-derived recommendations is highly dependent on the adequacy of the model. Here are some related questions that will illuminate the AI/ML model quality:
- Is the model an entirely new construction, or an updated version of a previously successful model?
- How did you assure yourself that the project team members have sufficient AI/ML knowledge for the type of model chosen and then built?
- How do we know that the model design is appropriate for the business situation we’re trying to model?
- How did you identify bias in your model, and what steps did you take to address it?
- Did any of the model outputs surprise you?
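One way a team can answer the “level of confidence” framing that management prefers is to report model accuracy as an interval rather than a single number. The sketch below is a hypothetical illustration using a simple bootstrap over held-out predictions; the labels and predictions are invented for the example.

```python
import random

# Illustrative sketch: bootstrapping held-out predictions to report
# accuracy as an interval rather than a single point estimate.
# The labels and predictions below are hypothetical toy data.

def bootstrap_accuracy(y_true, y_pred, n_boot=1000, seed=42):
    """Return (low, point, high): a 90% bootstrap interval for accuracy."""
    rng = random.Random(seed)
    n = len(y_true)
    point = sum(t == p for t, p in zip(y_true, y_pred)) / n
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        scores.append(sum(y_true[i] == y_pred[i] for i in idx) / n)
    scores.sort()
    return scores[int(0.05 * n_boot)], point, scores[int(0.95 * n_boot)]

# Toy held-out set: 80% of predictions are correct.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] * 5
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1] * 5
low, point, high = bootstrap_accuracy(y_true, y_pred)
print(f"accuracy {point:.2f}, 90% interval [{low:.2f}, {high:.2f}]")
```

An answer of “about 80%, plausibly between the low and high seventies and mid eighties” invites a far more useful executive conversation than a bare point estimate.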
Data and AI/ML model congruity
The selected data and its AI/ML model must work well together. The confidence you can have in AI/ML-derived recommendations is highly dependent on the congruity between the two. Here are some related questions that will illuminate data and AI/ML model congruity:
- How do we know that the data you’re using to support the model is the correct or optimum data for the model?
- Considering the wide variety of data sources this project employed, how do we know that the subject-matter expertise with the various data sources was sufficient within the project team?
- Can you explain the model outputs considering the data that you used?
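When asking the team to explain model outputs in terms of the data used, one simple technique they might demonstrate is permutation importance: shuffle one input at a time and measure how much prediction error grows. The sketch below is a hypothetical toy (the model, feature names, and data are all invented), not a description of any production system.

```python
import random

# Illustrative sketch: permutation importance as one way to connect
# model outputs back to the data that drives them. Shuffling a feature
# the model relies on degrades predictions; shuffling an irrelevant
# feature does not. The model and data here are hypothetical toys.

def model(row):
    """Toy scoring model: depends on 'usage', ignores 'noise'."""
    return 2.0 * row["usage"]

def mse(rows, target):
    """Mean squared error of the toy model on the given rows."""
    return sum((model(r) - t) ** 2 for r, t in zip(rows, target)) / len(rows)

def permutation_importance(rows, target, feature, seed=0):
    """Error increase when one feature's values are shuffled."""
    rng = random.Random(seed)
    shuffled = [r[feature] for r in rows]
    rng.shuffle(shuffled)
    permuted = [{**r, feature: v} for r, v in zip(rows, shuffled)]
    return mse(permuted, target) - mse(rows, target)

rows = [{"usage": float(i), "noise": float(i % 3)} for i in range(20)]
target = [2.0 * r["usage"] for r in rows]  # the model fits this data exactly
print(permutation_importance(rows, target, "usage") >
      permutation_importance(rows, target, "noise"))
# → True
```

A team that can rank its inputs this way is in a position to explain, in plain language, which data actually drives the recommendations and which data is along for the ride.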
Alignment with corporate strategy

The goal of the AI/ML project, to be useful, must advance an aspect of the corporate strategy. The importance or relevance of the AI/ML-derived recommendations is highly dependent on the alignment between the project goal and the corporate strategy. Here are some related questions that will illuminate the alignment:
- Please describe how you ensured that the goal of the model is aligned with our business goals or pressing issues.
- Please describe how you ensured that the project goal advances an element of the published corporate strategy.
- Do the AI/ML-derived recommendations suggest that some revisions to the corporate strategy should be considered?
- Could you elaborate on the risks, particularly public risk, that the organization will face if we implement your recommendations?
Evaluating the answers

Here’s how to evaluate the answers that you’ll receive to these questions from your data science team:
- If you receive blank stares, that means your question’s topic has not been addressed and needs more attention before the recommendations should be accepted. It will be necessary to add missing skills to the team or even replace the entire team.
- If you receive a lengthy answer filled with a lot of data science jargon or techno-babble, the topic has not been sufficiently addressed, or worse, your team may be missing critical skills required to deliver confident recommendations. Your confidence in the recommendations should decrease or even evaporate.
- If you receive a thoughtful answer that references uncertainties and risks associated with the recommendations, your confidence in the work should increase.
- If you receive a response that describes potential unanticipated consequences, your confidence in the recommendations should increase.
- If the answers you receive are supported by additional slides containing relevant numbers and charts, your confidence in the team should increase significantly.
- If the project team acknowledges that your question’s topic should receive more attention, your confidence in the team should increase. It will likely be necessary to allocate more resources to address the deficiency.
For a synopsis of current trends in AI, please read What’s New In Gartner’s Hype Cycle For AI, 2020.
What ideas can you contribute to help senior executives assure themselves that the AI/ML-derived recommendations are reasonable and flow logically from the project work performed? Let us know in the comments below.