So the year 2001 – the year that was to have ushered in HAL – has come and gone. The thinking, talking, seeing and feeling computer from Arthur C. Clarke’s and Stanley Kubrick’s visionary book and movie 2001: A Space Odyssey may not exist yet, but scientists are working towards it.
Scientists in places such as the MIT Artificial Intelligence Lab are working hard at building the various pieces that made HAL such a compelling character and fascinating prospect. One such endeavour – MIT’s Sociable Machines Project – is exploring the possibility of creating computers capable of reading, mimicking and perhaps one day even experiencing emotional responses. The thinking behind the project is that true intelligence requires the ability to experience emotions, and that computers capable of understanding emotions would be easier to deal with. As Dave put it in 2001, HAL was programmed to act as if he had genuine emotions so that it would be easier for the crew to talk to him.
This project, like all of the others at MIT’s AI lab, presents some intriguing possibilities. But before we actually succeed in creating a HAL-like computer, we need to step back and ask ourselves whether this is really what we want to do. Do we really want to create computers capable of imitating emotions, for instance? I for one believe that my Windows-based computer already behaves irrationally enough without someone deliberately giving it the ability to imitate happiness, sadness and anger. If you think HAL had emotional issues to work through, imagine a Microsoft OS with emotions. Not a very inviting thought, is it?
Often, we create technologies simply because we can, and we give little or no thought to what the potential ramifications of those technologies might be. We stand on the verge of some wondrous scientific discoveries today, such as cloning, nanotechnology and artificial intelligence, to name but a few. These discoveries could change the fabric of our society and the very nature of our existence, but there is almost no public debate about what they could mean and where they could lead us. This could be a potentially fatal oversight.
Unless we think more about the technologies that we want to create, we may be doomed to repeat Frankenstein’s mistake (to borrow an analogy from another science fiction classic). Without giving a thought to the consequences, Frankenstein created a monster simply because he could. When he completed his experiment, he was horrified by his creation and abandoned the creature to make its own way in the world without any guidance.
The results were deadly.