Do we want HAL to be our future?

So the year 2001 – the year that was to have ushered in HAL – has come and gone. The thinking, talking, seeing and feeling computer from Arthur C. Clarke's and Stanley Kubrick's visionary book and film 2001: A Space Odyssey may not exist yet, but scientists are working towards it.

Scientists in places such as the MIT Artificial Intelligence Lab are working hard at building the various pieces that made HAL such a compelling character and fascinating prospect. One such endeavour – MIT's Sociable Machines Project – is exploring the possibility of creating computers capable of reading, mimicking and perhaps one day even experiencing emotional responses. The thinking behind the project is that true intelligence requires the ability to experience emotions, and that computers capable of understanding emotions would be easier to deal with. As Dave put it in 2001, HAL was programmed to act as though he had genuine emotions so that it would be easier for the crew to talk to him.

This project, like all the others at MIT's AI Lab, presents some intriguing possibilities. But before we actually succeed in creating a HAL-like computer, we need to step back and ask ourselves whether this is really what we want to do. Do we want to create computers capable of imitating emotions, for instance? I for one believe that my Windows-based computer already behaves irrationally enough without someone deliberately giving it the ability to imitate happiness, sadness and anger. If you think HAL had emotional issues to work through, imagine a Microsoft OS with emotions. Not a very inviting thought, is it?

Often, we create technologies simply because we can, and we give little or no thought to their potential ramifications. We stand on the verge of some wondrous scientific discoveries today – cloning, nanotechnology and artificial intelligence, to name but a few. These discoveries could change the fabric of our society and the very nature of our existence, yet there is almost no public debate about what they could mean and where they could lead us. That is a potentially fatal oversight.

Unless we think more carefully about the technologies we want to create, we may be doomed to repeat Frankenstein's mistake (to borrow an analogy from another science fiction classic). Frankenstein created a monster simply because he could, without giving a thought to the consequences. When his experiment was complete, he was horrified by his creation and abandoned the creature, leaving it to roam society without any guidance.

The results were deadly.
