There are interesting parallels between one of this week’s milestones in the history of technology and the current excitement and anxiety about artificial intelligence (AI).
On February 10, 1996, IBM’s Deep Blue became the first machine to win a chess game against a reigning world champion, Garry Kasparov. Kasparov won three and drew two of the following five games, defeating Deep Blue by a score of 4–2. In May 1997, an upgraded version of Deep Blue won the six-game rematch 3½–2½ to become the first computer to defeat a reigning world champion in a match under standard chess tournament time controls.
Deep Blue was an example of so-called “artificial intelligence” achieved through “brute force,” the super-human calculating speed that has been the hallmark of digital computers since they were invented in the 1940s. Deep Blue was a specialized, purpose-built computer, the fastest to face a chess world champion, capable of examining 200 million positions per second, or 50 billion positions in the three minutes allocated for a single move in a chess game.
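The core idea behind such brute-force play can be illustrated by minimax search over a game tree. This is a deliberately toy sketch: the tree, scores, and depth below are hypothetical, and Deep Blue’s real search was vastly more elaborate (alpha-beta pruning, hand-tuned evaluation, custom chess chips).

```python
# A vastly simplified illustration of brute-force game-tree search
# (minimax): enumerate every line of play and pick the best outcome,
# assuming the opponent always picks the worst one for you.

def minimax(node, maximizing):
    """Return the best achievable score from `node`.

    A node is either a number (a leaf, i.e. an evaluated position)
    or a list of child nodes (positions reachable in one move).
    """
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical game tree, two moves deep:
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # maximizer moves first; prints 3
```

Deep Blue’s speed meant it could run a search like this over billions of positions per move; the “intelligence” is exhaustive lookahead, not understanding.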
To many observers, this was another milestone in man’s quest to build a machine in his own image and another indicator that it’s just a matter of time before we create a self-conscious machine complex enough to mimic the brain and display human-like intelligence or even super-intelligence.
An example of this “the mind is a meat machine” philosophy (to quote Marvin Minsky) is Charles Krauthammer’s “Be Afraid” in the Weekly Standard, May 26, 1997. To Krauthammer, Deep Blue’s game win in the 1996 match was due to “brute force” calculation, which, he says, is not artificial intelligence, just faster calculation of a much wider range of possible tactical moves.
But one specific move in Game 2 of the 1997 match, a game that Kasparov based not on tactics, but on strategy (where human players have a great advantage over machines), was “the lightning flash that shows us the terrors to come.” Krauthammer continues:
What was new about Game Two… was that the machine played like a human. Grandmaster observers said that had they not known who was playing they would have imagined that Kasparov was playing one of the great human players, maybe even himself. Machines are not supposed to play this way… To the amazement of all, not least Kasparov, in this game drained of tactics, Deep Blue won. Brilliantly. Creatively. Humanly. It played with — forgive me — nuance and subtlety.
Fast forward to March 2016, to Cade Metz writing in Wired on Go champion Lee Sedol’s loss to AlphaGo at the Google DeepMind Challenge Match. In “The AI Behind AlphaGo Can Teach Us About Being Human,” Metz reported on yet another earth-shattering artificial-intelligence-becoming-human-intelligence move:
Move 37 showed that AlphaGo wasn’t just regurgitating years of programming or cranking through a brute-force predictive algorithm. It was the moment AlphaGo proved it understands, or at least appears to mimic understanding in a way that is indistinguishable from the real thing. From where Lee sat, AlphaGo displayed what Go players might describe as intuition, the ability to play a beautiful game not just like a person but in a way no person could.
AlphaGo used 1,920 central processing units (CPUs) and 280 graphics processing units (GPUs), according to The Economist, and possibly additional proprietary tensor processing units (TPUs): a lot of hardware power, plus brute-force statistical analysis software known as deep neural networks, or more popularly as deep learning.
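What “deep” means here can be sketched in a few lines: stacked layers of weighted sums, each passed through a nonlinearity. The weights below are hypothetical placeholders; a real network like AlphaGo’s learns millions of weights from data and self-play, which is why it needs so much hardware.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Run input `x` through a stack of layers (hence "deep")."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Two toy layers: 2 inputs -> 3 hidden units -> 1 output.
# All numbers are made up for illustration.
net = [
    ([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]], [0.0, 0.1, -0.1]),
    ([[0.7, -0.5, 0.2]], [0.05]),
]
out = forward([1.0, 2.0], net)
```

However many layers are stacked, the operation remains massive numerical calculation, which is why critics describe deep learning as statistics at brute-force scale rather than understanding.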
Still, Google’s programmers have not dissuaded anyone from believing that they are creating human-like machines, and have often promoted the idea (the only Google exception I know of is Peter Norvig, but he is a member neither of the Google Brain nor of the Google DeepMind teams, Google’s AI avant-garde).
IBM’s programmers, in contrast, were more modest. Krauthammer quotes Joe Hoane, one of Deep Blue’s programmers, answering the question “How much of your work was devoted specifically to artificial intelligence in emulating human thought?” Hoane’s answer: “No effort was devoted to [that]. It is not an artificial intelligence project in any way. It is a project in — we play chess through sheer speed of calculation and we just shift through the possibilities and we just pick one line.”
So the earth-shattering moves may have been just a bug in the software. But that explanation escaped observers, then and now, who preferred to believe that humans can create intelligent machines (“giant brains,” as they were called in the early days of very fast calculators) because the only difference between humans and machines is the degree of complexity, the sheer number of human or artificial neurons firing. Here’s Krauthammer:
You build a machine that does nothing but calculation and it crosses over and creates poetry. This is alchemy. You build a device with enough number-crunching algorithmic power and speed—and, lo, quantity becomes quality, tactics becomes strategy, calculation becomes intuition… After all, how do humans get intuition and thought and feel? Unless you believe in some metaphysical homunculus hovering over (in?) the brain directing its bits and pieces, you must attribute our strategic, holistic mental abilities to the incredibly complex firing of neurons in the brain.
We are all materialists now. Or almost all of us. Read here and (especially) here for a different take.
If you are not interested in philosophical debates (and prefer to ignore the fact that the dominant materialist paradigm affects—through government policies, for example—many aspects of your life), at least read Tom Simonite’s excellent Wired article “AI Beat Humans at Reading! Maybe Not,” in which he shows how exaggerated various recent claims for AI “breakthroughs” are.
Beware of fake AI news and be less afraid.