Hinton vs. Murdoch: A tale of two AIs

At Collision 2023, held in Toronto in late June, there was much discussion about artificial intelligence (AI) and, more specifically, the ChatGPT chatbot, but nothing illustrated the current state of the field better than separate appearances by Colin Murdoch and Dr. Geoffrey Hinton.

Murdoch, chief business officer of Google DeepMind, the unit Google formed in April when it merged DeepMind with its Brain team, said that in the last six months the world has had a “eureka moment” when it comes to AI.

“For those of you who don’t know, our mission is to advance science and benefit humanity,” he said. “And when we think about some of the world’s biggest challenges, climate change is right up there. It’s a huge global challenge. And we’ve got to bring to bear the full force of humanity’s creativity and expertise to help solve it, and I believe AI can play a really important part.”

Murdoch added that Google DeepMind researchers are using AI to “help forecast the weather more accurately, and in a more timely manner, to help companies and communities around the world better respond to the more extreme weather conditions and the devastating impact that can have.”

According to a company blog entitled “Using AI to fight climate change,” “today’s computing infrastructure, including AI itself, is energy-intensive. To help solve some of these issues, we’ve been developing AI that can enhance existing systems, including optimizing industrial cooling and more efficient computer systems.

“Given our energy grids are not yet running on clean energy, it’s important we use our resources as efficiently as possible while we work on the transition to renewables. Accelerating the global transition to renewable energy sources can also greatly reduce carbon emissions.”

In his keynote, Murdoch said, “I know there are bigger breakthroughs on the horizon and that their impact will be enormous. But it’s really important that we do it in a way that is safe, ethical and inclusive and build a positive future for everyone.”

Hinton, meanwhile, who spoke with Nick Thompson, the CEO of The Atlantic, in front of a packed audience during a Q&A session on Centre Stage at the Enercare Centre, was far less positive.

“To emphasize, we’re entering a period of huge uncertainty, nobody really knows what’s going to happen,” he said. “And people whose opinion I respect have very different beliefs from me.

“Yann LeCun thinks everything’s going to be fine. They (AI chatbots) are just going to help us, it’s all going to be wonderful. But I think we have to take seriously the possibility that, if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control. And if they do that, we’re in trouble.”

Hinton, who along with Montreal-based Yoshua Bengio and LeCun is known as one of the “godfathers of AI” after the three won the coveted Turing Award in 2018, and who resigned his post at Google over concerns about AI, was asked the following by Thompson: “If an AI chatbot has been built by good humans for good purposes, it has been trained on good books and good text, then it will have a bias towards good in the future. Do you believe that or not?”

Hinton replied that “AI trained by good people will have a bias towards good, AI trained by bad people such as Putin or somebody like that will have a bias towards bad. We know they’re going to make battle robots. They’re busy doing it in many different defense departments. They’re not necessarily going to be good, since their primary purpose is going to be to kill people.

“Even if the AI isn’t super intelligent, if defense departments use it for making battle robots it’s going to be very nasty, scary stuff. And it’s going to be, even if it’s not super intelligent, and even if it doesn’t have its own intentions, it just does what Putin tells it to. It’s going to make it much easier, for example, for rich countries to invade poor countries.

“At present there is a barrier to invading poor countries willy-nilly, because you get dead citizens coming home. Instead, if they are just dead battle robots, that is great, the military industrial complex would love that.”

Hinton added at the end of the session that he does not have a plan for how to make AI more good than bad, but did say, “I think it’s great that it’s being developed, because we didn’t get to mention the huge numbers of good uses of it such as in medicine and climate change, and so on. I think progress in AI is inevitable and it’s probably good, but we seriously ought to worry about mitigating all of the bad side effects of it, and worry about the existential threats.”


Paul Barker
Paul Barker is the founder of PBC Communications, an independent writing firm that specializes in freelance journalism. His work has appeared in a number of technology magazines and online with the subject matter ranging from cybersecurity issues and the evolving world of edge computing to information management and artificial intelligence advances.
