A new study by MIT Sloan Management Review and Boston Consulting Group has found that third-party AI tools are responsible for over 55 per cent of AI-related failures in organizations.
These failures can have serious consequences, including reputational damage, financial losses, loss of consumer trust, and even litigation.
The study surveyed 1,240 respondents across 87 countries and found that 78 per cent of companies use third-party AI tools. Of these organizations, 53 per cent rely on third-party tools exclusively, with no in-house AI technology. Yet despite this widespread use, only 20 per cent of companies have evaluated the substantial risks these tools pose.
The researchers concluded that responsible AI (RAI) is harder to achieve when teams engage vendors without oversight, and that a more thorough evaluation of third-party tools is necessary.
“Enterprises have not fully adapted their third-party risk management programs to the AI context or challenges of safely deploying complex systems like generative AI products,” Philip Dawson, head of AI policy at Armilla AI, told MIT researchers. “Many do not subject AI vendors or their products to the kinds of assessment undertaken for cybersecurity, leaving them blind to the risks of deploying third-party AI solutions.”
The researchers recommend that organizations implement thorough risk assessment strategies for third-party AI tools, such as vendor audits, internal reviews, and compliance with industry standards. They also believe organizations should prioritize RAI at every level, from regulatory departments up to the CEO.
The research found that organizations whose CEO is directly involved in RAI are 58 per cent more likely to report business benefits than those whose CEO is not. They are also almost twice as likely to invest in RAI.
The sources for this piece include an article in ZDNET.