We learn to anthropomorphize very early in life. As children, we give human attributes to our teddy bears and other plush toys. Saturday morning cartoons are cast with dozens of talking animals. Many of us treat our pets as if they were human beings. We often extend this behavior to our motor vehicles, especially as they become "smarter".
Haven’t you sometimes looked at your computer or electronic device and wondered if there is some sort of “soul” or living force within it?
In 2014, the boundary between anthropomorphizing and artificial emulation blurred when a chatbot named "Eugene Goostman" passed a widely publicized Turing test competition. In this test, human judges exchanged text messages with an unknown "other" and were then asked whether they had been communicating with a human being or a machine. In this instance, about a third of the judges believed they had been conversing with a human being, not a computer.
Questions of life-and-death ethics are arising as self-driving cars become a reality. Will a self-driving car jeopardize its passengers' safety to save someone else's life, perhaps a pedestrian's? If the other vehicle carries more passengers, will your self-driving car sacrifice your life to save them? Major automobile manufacturers are pondering these ethical questions.
What about automated military drones? Just as self-driving cars can be programmed to reach destinations, robots can be programmed to reach targets and autonomously decide to kill human adversaries. Even an artificial intelligence with no warfare in its programming might have to resolve a moral dilemma the same way a self-driving car will.
These issues first came to light, and onto the public's radar, in 1968 with the release of "2001: A Space Odyssey", written by Stanley Kubrick and Arthur C. Clarke. This remarkably relevant film featured a fictional artificial intelligence named the "Heuristically programmed ALgorithmic computer", or "HAL" for short. According to the movie script, HAL was "activated" on January 12, 1992.
The character HAL speaks and acts in a personable, human-like manner. The story presents the “personality” of HAL evolving into the primary adversary of the tale. HAL is fully integrated into the workings of the “Discovery One” interplanetary vehicle. In addition to piloting the ship’s trajectory, HAL is capable of speech and face recognition, lip-reading, and even art appreciation.
It becomes obvious that HAL's primary mission is the journey to Jupiter, to be accomplished at all costs, including the elimination of the human crew should they attempt to shut HAL down and abort the mission.
HAL alone was aware of the mission's true purpose: to study a signal sent by the alien monolith. The conflict between concealing that secret from the crew and HAL's directive to process information accurately gave HAL a sense of its own imperfection. An artificial neurosis set into its mindset.
HAL began lying to the crew and eventually committed murder, first killing astronaut Frank Poole, then cutting off all life support to the hibernating scientists aboard the ship. Mission commander David Bowman saved himself only by disconnecting HAL.
In the sequel "2010", based on Clarke's novel "2010: Odyssey Two", HAL is reactivated and its faulty programming is erased. HAL becomes a hero: told that "Discovery One" must be left behind, meaning HAL itself faces probable destruction, HAL helps the humans escape aboard the Russian spacecraft whose international crew had been sent to reactivate "Discovery One".
Of course, that wasn't the end of HAL. In "2061: Odyssey Three", the third novel in the series, the essences of HAL and David Bowman are found to exist within the monolith on Jupiter's moon Europa. HAL and Bowman have learned to operate the monolith's basic functions, and they discover that its main purpose is to act as a catalyst for brain evolution, finally explaining to the audience the ape-men's encounter with the monolith in the opening scene of "2001: A Space Odyssey".
The question I have about this "prophetic" series is whether our current concerns about real-life artificial intelligence will bring about an evolution of our own brains. If so, will we overcome our built-in proclivity toward self-destruction as a species? Or will our worst fears come true? Will human life disappear because a superior machine intelligence acts to protect itself at all costs?