50 years ago, on April 2, 1968, the film 2001: A Space Odyssey had its world premiere at the Uptown Theater in Washington, D.C. Reflecting the mixed reactions to the film, Renata Adler wrote in The New York Times that it was “somewhere between hypnotic and immensely boring.”
The 160-minute film with only 40 minutes of dialogue went on to become “the movie that changed all movies forever,” as the poster for its 50th anniversary re-release modestly proclaims. There is no doubt, however, about its influence on popular culture, specifically on the widespread belief in the possibility of sentient machines driven by killer instincts.
HAL 9000 became known, even to many people who never saw the movie, as the artificially intelligent computer that kills, one after another, the astronauts on a mission to Jupiter. A Hollywood ending required that the one surviving astronaut, Dave Bowman, triumph over the human-like machine. “Catchphrases like HAL’s chilling ‘I’m sorry, I can’t do that Dave’,” says Paul Whitington, “entered the popular lexicon.”
The science-fiction writer and futurist Arthur C. Clarke, who worked with director Stanley Kubrick on the plot for the movie (and later published a novel with the same title), said in an interview:
Of course the key person in the expedition was the computer HAL, who as everyone said is the only human character in the movie. HAL developed slowly. At one time we were going to have a female voice. Athena, I think, was a suggested name. I don’t know again when we changed to HAL. I’ve been trying for years to stamp out the legend that HAL was derived from IBM by the transmission of one letter. But, in fact, as I’ve said in the book, HAL stands for Heuristic Algorithmic, H-A-L. And that means that it can work on programs already set up, or it can look around for better solutions and you get the best of both worlds. So, that’s how the name HAL originated.
That both Clarke and Kubrick denied any intentional reference to IBM may have had to do with the fact that IBM was one of the many organizations and individuals consulted while they created the film. They started the four-year process in 1964, the year IBM introduced its own masterpiece, the computer that sealed the company’s domination of the industry for the next quarter of a century.
On April 7, 1964, IBM announced the System/360, the first family of computers spanning the performance range of all existing (and incompatible) IBM computers. Thomas J. Watson Jr., IBM’s CEO at the time, wrote in his autobiography Father, Son & Co.:
By September 1960, we had eight computers in our sales catalog, plus a number of older, vacuum-tube machines. The internal architecture of each of these computers was quite different; different software and different peripheral equipment, such as printers and disk drives, had to be used with each machine. If a customer’s business grew and he wanted to shift from a small computer to a large one, he had to get all new everything and rewrite all his programs, often at great expense. …
[The] new line was named System/360—after the 360 degrees in a circle—because we intended it to encompass every need of every user in the business and the scientific worlds. Fortune magazine christened the project “IBM’s $5,000,000,000 Gamble” and billed it as “the most crucial and portentous—as well as perhaps the riskiest—business judgment of recent times.”… It was the biggest privately financed commercial project ever undertaken. The writer at Fortune pointed out that it was substantially larger than the World War II effort that produced the atom bomb.
And like nuclear energy, the System/360, and all the computers and networks of computers that came after it, could be used for creation or for destruction. It was a tool, and as 2001 depicts in the first part of the movie, tools can be used to help humanity or as weapons in humanity’s wars.
In Profiles of the Future, published in 1962, Arthur Clarke wrote:
The old idea that Man invented tools is… a misleading half-truth; it would be more accurate to say that tools invented Man. They were very primitive tools… yet they led to us—and to the eventual extinction of the apeman who first wielded them… The tools the apemen invented caused them to evolve into their successor, Homo sapiens. The tool we have invented is our successor. Biological evolution has given way to a far more rapid process—technological evolution. To put it bluntly and brutally, the machine is going to take over.
Talk of the machine taking over has risen to the surface of public discourse over the last 50-plus years each time computer engineers have added yet another “human-like” capability to the tools they create. After IBM’s Watson AI defeated Jeopardy! champion Ken Jennings in 2011, he wrote in Slate:
I understood… why the engineers wanted to beat me so badly: To them, I wasn’t the good guy, playing for the human race. That was Watson’s role, as a symbol and product of human innovation and ingenuity. So my defeat at the hands of a machine has a happy ending, after all. At least until the whole system becomes sentient and figures out the nuclear launch codes…
The fear that machines will figure out the nuclear codes and destroy their creators was called “absurd” by one of 2001’s creators, who strongly believed, as we saw above, that they would indeed “take over.” Dismissing the “popular idea” of the malevolent AI killer he went on to co-create a few years later, Clarke wrote in Profiles of the Future:
The popular idea, fostered by comic strips and the cheaper forms of science-fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. I am almost tempted to argue that only unintelligent machines can be malevolent… Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of cooperativeness. If there is ever a war between men and machines, it is easy to guess who will start it.
According to this materialistic fantasy, which many computer and AI engineers adhere to today, tools create us and will match or surpass human intelligence but will do no harm.
Or maybe not, on both counts: perhaps computers can harm humans on their own, and perhaps machines will never match, let alone surpass, human intelligence.
Here’s Piers Bizony in Nature:
Certainly, in the film, the surviving astronaut’s final conflict with HAL prefigures a critical problem with today’s artificial-intelligence (AI) systems. How do we optimize them to deliver good outcomes? HAL thinks that the mission to Jupiter is more important than the safety of the spaceship’s crew. Why did no one program that idea out of him? Now, we face similar questions about the automated editorship of our searches and news feeds, and the increasing presence of AI inside semi-autonomous weapons…
Should we watch out for superior “aliens” closer to home, and guard against AI systems one day supplanting us in the evolutionary story yet to unfold? Or does the absence of anything like HAL, even after 50 years, suggest that there is, after all, something fundamental about intelligence that is impossible to replicate inside a machine?