By Alan Duncan, Gartner, Inc.

Given the pervasive nature of artificial intelligence (AI), the consequences of getting AI right or wrong are potentially profound. When used incorrectly, AI can unintentionally reinforce harmful biases, increase polarization and result in other damaging consequences.

Amid the excitement and hype surrounding the possibilities of AI, it is easy to focus on the technology and coding disciplines — what might be considered the ‘artificial’ aspects.

However, what could be thought of as the ‘intelligent’ aspects of a digitally connected world don’t function, and indeed don’t exist, without data. While most CIOs and IT professionals are familiar with the people, process and technology capabilities of business models, they do not ‘speak data’ fluently.

To use AI correctly, companies need to build the case for data literacy as a new core competency for both creators and consumers of AI. At the upcoming Gartner IT Symposium/Xpo in Toronto, Gartner will advise CIOs responsible for enabling AI initiatives to follow three steps: first, build AI right; then use AI right; and ultimately keep AI right.

Build AI right

To “build AI right,” it is key to first establish the basic vocabulary of AI — a technical dialect of how people “speak data.” At the very least, CIOs should determine the primary terms used when describing an AI system or solution, including the purpose or reason that the AI solution is being developed, as well as other key terms, such as the types of data used and gathered from the solution.

Use AI right

The information language barrier can exist locally or systemically, regardless of program scope or organizational maturity. Addressing it requires a mindset shift as well as deliberate acknowledgment and intervention to course correct. To make data literacy more explicit, CIOs should develop a data literacy program.

  • Identify fluent and native speakers who speak data naturally and effortlessly. Fluent speakers should be adept at describing contextualized use cases and outcomes, the analytical techniques applied to them, and the underlying data sources, entities and key attributes involved.
  • Identify skilled translators. Classic translators are often enterprise data or information architects, data scientists, information stewards or related program managers.
  • Identify areas where communication barriers are inhibiting the effectiveness of data and analytics initiatives. Pay particular attention to business-IT gaps, data-analytics gaps and veteran-rookie gaps.
  • Actively listen for business outcomes not clearly articulated in terms of explicit action. What business moments are being enabled with enhanced data and analytics capabilities? What operational decisions are being improved?
  • Identify key stakeholders requiring specialized translations. To assess data literacy levels, ask key stakeholders to articulate the value of data as a strategic asset in terms of business outcomes, including enhanced business moments, monetization and risk mitigation.
  • Identify and maintain a list of words and phrases. Engage the data and analytics team in crafting ways to better articulate these phrases.

Keep AI right

Not even the most successful companies can afford to think they are immune to ethical mishaps. Extensive and explicit discussion is needed to distinguish between the types of ethical questions and dilemmas one can face and the actual ethical position one chooses to take.

  • Take a step back and absorb digital ethics and digital connectivism as philosophies for the improvement of digital business — and digital society more generally.
  • Actively look for ethical case studies relating to the use of data in AI, as the ethical questions that confront you are often not new. Opportunities include competitive differentiation and a superior value proposition; dangers include reputational risk, regulatory issues and financial losses.
  • Use AI algorithms and data exchange to enable digital interactions and stakeholder participation in an ecosystem, rather than as specific process controls. Encourage everyone who contributes data within the AI environment to be an active participant in a mutually beneficial ecosystem.

Alan D. Duncan is a research vice president at Gartner where he focuses on data and analytics strategy.