As a senior executive or CIO, how can you assure yourself that recommendations derived from artificial intelligence (AI) or machine learning (ML) are reasonable and flow logically from the project work that has been performed?
While you want to be supportive and encouraging of your team’s work, you don’t want to be inadvertently misled, and you want to confirm that the data science team hasn’t misled itself.
“The quality of the features or variables in your data has a major impact on the quality of the recommendations you can expect,” says Richard Henderson, EMEA Team Lead Solution Architect at TigerGraph. “Our work in fraud detection again demonstrated the importance of data quality for variables used in our algorithms.”
Here are some high-level questions that you can ask the team about the AI/ML variables. They’re designed to build everyone’s confidence that the AI/ML recommendations are sound and can be implemented, even though you, and everyone else, know you’re not a data science expert. Start with the one question that concerns you most and that you’re most comfortable asking.
Confidence in the AI/ML variables
Some variables, or features, have a greater impact on AI/ML model outputs than others. The confidence you can have in AI/ML-derived recommendations depends heavily on the project team understanding which variables matter most. Here are some questions that will illuminate how much confidence you can place in the AI/ML model variables:
- Which variables is your model most sensitive to?
- How did you ensure that the data quality for the most critical variables is sufficient?
- How similar or different are the variable definitions in your model compared to equivalent variables we use to report our operations’ performance?
- To what extent did you revise the model as you learned which variables are the most important?
- How did you avoid using variables that appear to be quite different but actually represent almost the same underlying feature, thereby introducing bias into your model?
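If you want a feel for how a team might answer the first and last of these questions, one common technique is permutation importance: shuffle one variable at a time and measure how much the model’s accuracy drops, then run a simple correlation check to flag near-duplicate variables. Below is a minimal sketch in plain NumPy; the data is synthetic and the threshold rule stands in for a hypothetical trained model (the variable names `amount`, `noise`, and `amount_usd` are illustrative, not from any real project):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic fraud-style data: "amount" drives the label, "noise" is
# irrelevant, and "amount_usd" is "amount" under another name -- the
# near-identical-feature trap from the last question above.
n = 1000
amount = rng.normal(100.0, 30.0, n)
noise = rng.normal(0.0, 1.0, n)
amount_usd = amount * 1.0001
X = np.column_stack([amount, noise, amount_usd])
names = ["amount", "noise", "amount_usd"]
y = (amount > 120.0).astype(int)

# Hypothetical stand-in for a trained model: flags transactions
# whose first column exceeds a threshold.
def predict(X):
    return (X[:, 0] > 120.0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10):
    """Mean drop in accuracy when each column is shuffled in turn."""
    baseline = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break this column's link to y
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(predict, X, y)
for name, score in zip(names, imp):
    print(f"{name:>10}: {score:.3f}")

# Near-duplicate variables show up as almost perfectly correlated columns.
corr = np.corrcoef(X, rowvar=False)
print("corr(amount, amount_usd) =", round(corr[0, 2], 4))
```

Shuffling `amount` destroys the model’s accuracy while shuffling `noise` changes nothing, which is exactly the sensitivity ranking the first question asks about. The correlation check catches the disguised duplicate before it can quietly split importance between two seemingly different variables.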
Here’s how to evaluate the answers that you’ll receive to these questions from your data science team:
- If you receive blank stares, your question’s topic has not been addressed and needs more attention before the recommendations are accepted. It may be necessary to add missing skills to the team, or even replace the entire team.
- If you receive a lengthy answer filled with data science jargon or techno-babble, the topic has not been sufficiently addressed, or worse, your team may be missing critical skills required to deliver confident recommendations. Your confidence in the recommendations should decrease or even evaporate.
- If you receive a thoughtful answer that references uncertainties and risks associated with the recommendations, your confidence in the work should increase.
- If you receive a response that describes potential unanticipated consequences, your confidence in the recommendations should increase.
- If the answers you receive are supported by additional slides containing relevant numbers and charts, your confidence in the team should increase significantly.
- If the project team acknowledges that your question’s topic should receive more attention, your confidence in the team should increase. It will likely be necessary to allocate more resources, such as external data science consultants, to address the deficiency.
For a summary discussion of the topics you should consider as you seek to assure yourself that AI/ML recommendations are sound, please read this article: Skeptical about AI-derived recommendations? Here are some tips to get you started.
What ideas can you contribute to help senior executives assure themselves that the AI/ML-derived recommendations are reasonable and flow logically from the project work performed? Let us know in the comments below.