TORONTO – With artificial intelligence playing an ever-greater role in digital transformation, the global lead behind one of the world's best-known AI systems wants both developers and the companies adopting their work to ask themselves what platform they're using to steer AI itself.
The answer suggested by IBM Watson Group global lead Neil Sahota during his Tuesday appearance at Toronto’s AiDecentralized conference, where ITWC was a media sponsor: the blockchain, the network of distributed databases, still best known for powering Bitcoin, that records, validates, and organizes the information created by users performing online actions.
“If we’re going to have machines making recommendations, suggestions, possibly decisions, where does that control live?” Sahota asked. “Is it with the corporations? Governments? Or is it decentralized?”
“One of the big things I like to say about AI is that whoever controls the training controls the universe,” he continued. “If you’ve got five organizations trying to do the exact same thing by building their own AI solutions, they’ll come up with slightly different answers.”
That’s where Sahota believes the blockchain could come in. Because the information it stores – including, theoretically, AI systems – is distributed across many nodes rather than held in one place, blockchain networks are often considered among the most secure computing platforms available.
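Sahota didn’t sketch an architecture, but the tamper-evidence he’s alluding to can be illustrated with a minimal hash-chained ledger. This is an illustrative sketch only – the function names and record fields are invented for this example, and a real blockchain adds consensus protocols and replication across independent nodes:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a new block linked to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)
    return chain

def is_valid(chain):
    """Any node can independently re-check the whole chain:
    each block's hash must match its contents, and each block
    must point at the previous block's hash."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"event": "model_update", "version": 1})
add_block(chain, {"event": "model_update", "version": 2})
assert is_valid(chain)

# Tampering with an earlier record breaks every later link,
# so other copies of the ledger would reject the altered chain.
chain[0]["data"]["version"] = 99
assert not is_valid(chain)
```

Because every participant holds a copy and can re-run the validation, no single party can quietly rewrite history – which is the property that makes the blockchain attractive as a neutral home for AI training records.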
“I wish I could say I have a simple solution for you, but I don’t,” Sahota told the audience, acknowledging that while he believed the blockchain could support AI, “I don’t quite know how it would work yet.”
But it’s easy to imagine how, in theory, it could work: AI developers from around the world, spurred by the question of what a “universal” AI system should look like, using the blockchain as their platform of choice.
How we’re wired to make the answer difficult
As it stands, artificial intelligence enjoys a reputation as a dispassionate, unbiased outsider, Sahota said, but that reputation is wrong: AI can be as biased as its programmers, and as difficult to trust to find the correct solution to a problem as a human placed in a similar situation.
“This brings up the question of how we train AI,” Sahota said. “It’s an open-ended question because we have different ethos, different morals, different societal norms.”
Moreover, although machines are more reliable than humans at many tasks, most of us seem wired to trust humans more than machines, despite the former's higher rates of failure and the latter's frequently strict margins of error.
“I think it’s because of books, movies, TV we’ve grown up – humans always beat the machines,” Sahota told IT World Canada. “There’s always something we click together that machines can’t see, and in some cases that’s definitely true… I think from a creative standpoint, people are going to do a much better job with high-level tasks such as figuring out what the next really great product is going to be, for instance.”
But when it comes to fact-based tasks – such as, to pick a random example, diagnosing patients – he thinks machines have the upper hand.
“Machines have way better recollection than humans,” he told IT World Canada. “They remember everything they read, see, hear, experience.”
During his presentation, Sahota illustrated his point by asking the audience whether they would place greater trust in a human doctor’s diagnosis than an AI platform’s.
The answer was unanimous: The human.
Yet statistically speaking, one U.S. study found that doctors misdiagnose some 20 per cent of patients with serious illnesses, while at least one Watson-powered tool used for diagnosing cancer has a 90 per cent success rate.
“We believe there is something intrinsic about us as people… that lets us do some things we do better than machines, but there are some things that machines do better than people,” Sahota said.
The solution involves both – if they’re programmed right
That insight, however, led back to a variation of his original question: How do we program AI to reach what the majority of us will consider an acceptable margin of error?
And the margin of error humans will accept from machines is very, very low: witness ride-hailing giant Uber's announcement on Tuesday that it would be shutting down its Arizona-based self-driving vehicle service after nearly two years of testing, in response to a pedestrian being killed.
“I think this is something we should return to over the next couple of years,” Sahota said. “I think as a world society, it’s something we’re going to have to try to figure out.”
The answer will also involve teaching humans to trust machines, he said – and he truly believes that explaining the benefits of blockchain is an excellent way to do so.