The Origins of the Blue Origin-SpaceX Race and the Broadcom-Qualcomm Fight


This week’s milestones in the history of technology include communicating through the ether and travelling through space.

On March 16, 1926, Robert Goddard launched the world’s first liquid-fueled rocket. Goddard and his team launched 34 rockets between 1926 and 1941, achieving altitudes as high as 2.6 kilometers (1.6 miles) and speeds as high as 885 km/h (550 mph). Six years earlier, an editorial in The New York Times had scoffed at Goddard’s assertion that it was possible to send a rocket to the Moon and called into question his understanding of physics:

That Professor Goddard, with his “chair” in Clark College and the countenancing of the Smithsonian Institution, does not know the relation of action and reaction, and of the need to have something better than a vacuum against which to react—to say that would be absurd. Of course he only seems to lack the knowledge ladled out daily in high schools.

Arthur C. Clarke in Profiles of the Future: “Right through the 1930s and 1940s, eminent scientists continued to deride the rocket pioneers—when they bothered to notice them at all… The lesson to be learned … is one that can never be repeated too often and is one that is seldom understood by laymen—who have an almost superstitious awe of mathematics. But mathematics is only a tool, though an immensely powerful one. No equations, however impressive and complex, can arrive at the truth if the initial assumptions are incorrect.”

Forty-nine years after its editorial mocking Goddard, on July 17, 1969—the day after the launch of Apollo 11—The New York Times published a short item under the headline “A Correction.” The three-paragraph statement summarized its 1920 editorial, and concluded:

Further investigation and experimentation have confirmed the findings of Isaac Newton in the 17th Century and it is now definitely established that a rocket can function in a vacuum as well as in an atmosphere. The Times regrets the error.

On March 18, 1909, in Denmark, Einar Dessau used a shortwave transmitter to converse with a government radio post about six miles away in what is believed to have been the first broadcast by a ‘ham’ radio operator.

Susan Douglas in Inventing American Broadcasting on the emergence in the U.S. of a “grass-roots network of boys and young men,” amateur radio operators, between 1906 and 1912:

To the amateurs, the ether was neither the rightful province of the military nor a resource a private firm could appropriate and monopolize. The ether was, instead, an exciting new frontier in which men and boys could congregate, compete, test their mettle, and be privy to a range of new information… This realm… belonged to ‘the people.’ Thinking about the ether this way, and acting on such ideas on a daily basis, was a critical step in the transformation of wireless into radio broadcasting.

From the foreword (by Jack Binns) to The Radio Boys First Wireless (1922): “Don’t be discouraged because Edison came before you. There is still plenty of opportunity for you to become a new Edison, and no science offers the possibilities in this respect as does radio communications.”

Today, as Singapore-based Broadcom pursues a $117 billion hostile takeover of US-based Qualcomm, there is “so much scrapping… by companies and countries over a next wave of wireless technology known as 5G.” And Jeff Bezos’ Blue Origin is hiring an Astronaut Experience Manager as it is “inching closer to launching tourists into sub-orbital space,” while Elon Musk’s SpaceX is aiming to colonize Mars, envisioning that “millions of people [will] live and work in space.”

Posted in Predictions, Wireless

Yesterday’s Futures of Work


Alfred Elmore, The Invention of the Stocking Loom, 1846 (Source: British Museum)

A number of this week’s milestones in the history of technology connect accidental inventors and the impact of their inventions on work and workers.

On March 11, 1811, the first Luddite attack in which knitting frames were actually smashed occurred in the Nottinghamshire village of Arnold. Kevin Binfield in Writings of the Luddites: “The grievances consisted, first, of the use of wide stocking frames to produce large amounts of cheap, shoddy stocking material that was cut and sewn rather than completely fashioned and, second, of the employment of ‘colts,’ workers who had not completed the seven-year apprenticeship required by law.”

Back in 1589, William Lee, an English clergyman, invented the first stocking frame knitting machine, which, after many improvements by other inventors, drove the spread of automated lace making at the end of the 18th century. Legend has it that Lee had invented his machine in order to get revenge on a lover who had preferred to concentrate on her knitting rather than attend to him (as depicted by Alfred Elmore in the 1846 painting The Invention of the Stocking Loom).

Lee demonstrated his machine to Queen Elizabeth I, hoping to obtain a patent, but she refused, fearing the impact on the work of English artisans: “Thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars” (quoted in Why Nations Fail by Daron Acemoglu and James Robinson).

Another accidental inventor was Alexander Graham Bell. His father, grandfather, and brother had all been associated with work on elocution and speech, and both his mother and wife were deaf, influencing Bell’s research interests and inventions throughout his life. Bell’s research on hearing and speech led him to experiment with the transmission of sound by means of electricity, culminating on March 7, 1876, when he received a US patent for his invention of what would later be called the telephone. Three days later, on March 10, 1876, Bell said into his device: “Mr. Watson, come here, I want you.” Thomas Watson, his assistant, sitting in an adjacent room at 5 Exeter Place, Boston, answered: “Mr. Bell, do you understand what I say?”

Later that day, Bell wrote to his father (as Edwin S. Grosvenor and Morgan Wesson recount in Alexander Graham Bell): “Articulate speech was transmitted intelligibly this afternoon. I have constructed a new apparatus operated by the human voice. It is not of course complete yet—but some sentences were understood this afternoon… I feel I have at last struck the solution of a great problem—and the day is coming when telegraph wires will be laid to houses just like water or gas—and friends converse with each other without leaving home.”

The telephone was adopted enthusiastically in the US, but there were doubters elsewhere, questioning its potential to re-engineer how businesses communicated. In 1879, William Henry Preece, inventor and consulting engineer for the British Post Office, could not see the phone succeeding in Britain because he thought the new technology could not compete with cheap labor: “…there are conditions in America which necessitate the use of instruments of this kind more than here. Here we have a superabundance of messengers, errand boys, and things of that kind.”

The telephone not only ended the careers of numerous messenger boys around the world but also led to the total demise of the telegraph operator. On January 25, 1915, Bell inaugurated the first transcontinental telephone service in the United States with a phone call from New York City to Thomas Watson in San Francisco. Bell repeated the words of his first telephone call of March 10, 1876, and this time Watson replied, “It would take me a week to get to you this time.”

The telephone, while destroying some jobs, created new occupations such as the telephone operator. But this job, very popular among young girls, also eventually fell victim to yet another accidental inventor.

On March 10, 1891, Almon Brown Strowger, an American undertaker, was issued a patent for his electromechanical switch to automate telephone exchanges. Steven Lubar in InfoCulture: “…a Kansas City undertaker, Strowger had a good practical reason for inventing the automatic switchboard. Legend has it that his telephone operator was the wife of a business rival, and he was sure that she was diverting business from him to her husband. And so he devised what he called a ‘girl-less, cuss-less’ telephone exchange.”

The first automatic switchboard was installed in La Porte, Indiana, in 1892, but automatic exchanges did not become widespread until the 1930s. In a preview of reactions to some of the inventions of the computer age, users were not enthusiastic about having work shifted to them. But AT&T’s top-notch propaganda machine overcame that inconvenience by predicting that before long, more operators would be needed than there were young girls suitable for the job.

But both AT&T and its users were ambivalent about the move to automatic switching. While users were not happy about working for no pay for the phone company, they also valued the privacy afforded by the automatic switchboard. And AT&T was interested in preserving its vast investment in operator-assisted switching equipment. Richard John in Network Nation:

To rebut the presumption that Bell operating companies were wedded to obsolete technology, Bell publicists lauded the female telephone operator as a faithful servant… The telephone operator was the “most economical servant”–the only flesh-and-blood servant many telephone users could afford…. The idealization of the female telephone operator had a special allure for union organizers intent on protecting telephone operators from technological obsolescence. Electromechanical switching … testified a labor organizer in 1940… was “inanimate,” “unresponsive,” and “stupid,” and did “none of the things which machinery is supposed to do in industry”–making it a “perfect example of a wasteful, expensive, inefficient, clumsy, anti-social device.”

The transistor, invented in 1947 at AT&T’s Bell Labs to improve switching, led to the rise and spread of computerization, and to making the switching system essentially a computer. By 1982, almost half of all calls were switched electronically. The transistor also took computerization from the confines of deep-pocketed corporations and put it in the hands of hobbyists.

On March 5, 1975, The Homebrew Computer Club met for the first time, with 32 “enthusiastic people” attending. Apple’s co-founder Steve Wozniak:

Without computer clubs there would probably be no Apple computers. Our club in the Silicon Valley, the Homebrew Computer Club, was among the first of its kind. It was in early 1975, and a lot of tech-type people would gather and trade integrated circuits back and forth. You could have called it Chips and Dips…


The Apple I and II were designed strictly on a hobby, for-fun basis, not to be a product for a company. They were meant to bring down to the club and put on the table during the random access period and demonstrate: Look at this, it uses very few chips. It’s got a video screen. You can type stuff on it. Personal computer keyboards and video screens were not well established then. There was a lot of showing off to other members of the club. Schematics of the Apple I were passed around freely, and I’d even go over to people’s houses and help them build their own.


The Apple I and Apple II computers were shown off every two weeks at the club meeting. “Here’s the latest little feature,” we’d say. We’d get some positive feedback going and turn people on. It’s very motivating for a creator to be able to show what’s being created as it goes on. It’s unusual for one of the most successful products of all time, like the Apple II, to be demonstrated throughout its development.

Apple and other PC makers went on to make a huge impact on workers and how work gets done (and later, on how consumers live their lives). It was difficult for “experts,” however, to predict exactly which workers would be affected and how.

Yesterday’s futures reveal a lot about what did not happen and why it didn’t. I have in my files a great example of the genre, a report published in 1976 by the Long Range Planning Service of the Stanford Research Institute (SRI), titled “Office of the Future.”

The author of the report (working not far away from where the Homebrew Computer Club was meeting) was a Senior Industrial Economist at SRI’s Electronics Industries Research Group, and a “recognized authority on the subject of business automation.” His bio blurb indicates that he “also worked closely with two of the Institute’s engineering laboratories in developing his thinking for this study. The Augmentation Research Center has been putting the office of the future to practical test for almost ten years… Several Information Science Laboratory personnel have been working with state-of-the-art equipment and systems that are the forerunners of tomorrow’s products. The author was able to tap this expertise to gain a balanced picture of the problems and opportunities facing office automation.”

And what was the result of all this research and analysis? The manager of 1985, the report predicted, will not have a personal secretary. Instead he (decidedly not she) will be assisted, along with other managers, by a centralized pool of assistants (decidedly and exclusively, according to the report, of the female persuasion). He will contact the “administrative support center” whenever he needs to dictate a memo to a “word processing specialist,” find a document (helped by an “information storage/retrieval specialist”), or rely on an “administrative support specialist” to help him make decisions.

Of particular interest is the report’s discussion of the sociological factors driving the transition to the “office of the future.” Forecasters often leave out of their analysis the annoying and uncooperative (with their forecast) motivations and aspirations of the humans involved. But this report does consider sociological factors, in addition to organizational, economic, and technological trends. And it’s worth quoting at length what it says on the subject:

“The major sociological factor contributing to change in the business office is ‘women’s liberation.’ Working women are demanding and receiving increased responsibility, fulfillment, and opportunities for advancement. The secretarial position as it exists today is under fire because it usually lacks responsibility and advancement potential. The normal (and intellectually unchallenging) requirements of taking dictation, typing, filing, photocopying, and telephone handling leave little time for the secretary to take on new and more demanding tasks. The responsibility level of many secretaries remains fixed throughout their working careers. These factors can negatively affect the secretary’s motivation and hence productivity. In the automated office of the future, repetitious and dull work is expected to be handled by personnel with minimal education and training. Secretaries will, in effect, become administrative specialists, relieving the manager they support of a considerable volume of work.”

Despite the women’s liberation movement of his day, the author could not see beyond the creation of a two-tier system in which some women would continue to perform dull and unchallenging tasks, while other women would be “liberated” into a fulfilling new job category of “administrative support specialist.” In this 1976 forecast, there are no women managers.

But this is not the only sociological factor the report missed. The most interesting sociological revolution of the office in the 1980s – and one missing from most (all?) accounts of the PC revolution – is what managers (male and female) did with their new word processing, communicating, calculating machine. They took over some of the “dull” secretarial tasks that no self-respecting manager would deign to perform before the 1980s.

This was the real revolution: The typing of memos (later emails), the filing of documents, the recording, tabulating, and calculating. In short, a large part of the management of office information, previously exclusively in the hands of secretaries, became in the 1980s (and progressively more so in the 1990s and beyond) an integral part of managerial work.

This was very difficult, maybe impossible, to predict. It was a question of status. No manager would type before the 1980s because it was perceived as work that was not commensurate with his status. Many managers started to type in the 1980s because now they could do it with a new “cool” tool, the PC, which conferred on them the leading-edge, high-status image of this new technology. What mattered was that you were important enough to have one of these cool things, not that you performed with it tasks that were considered beneath you just a few years before.

What was easier to predict was the advent of the PC itself. And the SRI report missed this one, too, even though it was aware of the technological trajectory: “Computer technology that in 1955 cost $1 million, was only marginally reliable, and filled a room, is now available for under $25,000 and the size of a desk. By 1985, the same computer capability will cost less than $1000 and fit into a briefcase.”
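
As a side note, here is a minimal sketch of the arithmetic implied by that trajectory (my own back-of-the-envelope calculation, not anything from the SRI report): a compound cost decline of roughly 16 percent a year from 1955 to 1976, and about 30 percent a year over the projected 1976 to 1985 period.

```python
# Back-of-the-envelope sketch (not from the SRI report) of the compound annual
# cost decline implied by its figures: $1,000,000 in 1955, under $25,000 in
# 1976, and a projected $1,000 by 1985.
def annual_decline(cost_start, cost_end, years):
    """Implied compound annual rate of cost decline between two price points."""
    return 1 - (cost_end / cost_start) ** (1 / years)

print(f"1955-1976: {annual_decline(1_000_000, 25_000, 21):.0%} per year")  # ~16%
print(f"1976-1985: {annual_decline(25_000, 1_000, 9):.0%} per year")       # ~30%
```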

But the author of the SRI report could only see a continuation of the centralized computing of his day. The report’s 1985 fictional manager views documents on his “video display terminal” and the centralized (and specialized) word processing system of 1976 continues to rule the office ten years later.

This was a failure to predict how the computer that would “fit into a briefcase” would become personal, i.e., would take the place of the “video display terminal” and then augment it as a personal information management tool. And the report also failed to predict the ensuing organizational development in which distributed computing replaced or was added to centralized computing.

So regard the current projections of how many jobs will be destroyed by artificial intelligence with healthy skepticism. No doubt, as in the past, many occupations will be affected by increased computerization and automation. But many current occupations will thrive and new ones will be created, as the way work is done—and the way we live—continues to change.

Posted in Automation, Innovation

Debating the Impact of AI on Jobs



One of this week’s milestones in the history of technology sets the tone for two centuries of debating the impact of machines and artificial intelligence on human work and welfare.

On February 27, 1812, Lord Byron gave his first address as a member of the House of Lords in a parliamentary debate on the Frame Breaking Act, which made destroying or damaging lace-machines (stocking frames) a crime punishable by death.

Criticizing the proposed legislation advanced by the government in response to textile workers protesting the industrialization and mechanization of their craft,  Byron told his peers:

The rejected workmen, in the blindness of their ignorance, instead of rejoicing at these improvements in arts so beneficial to mankind, conceived themselves to be sacrificed to improvements in mechanism. In the foolishness of their hearts, they imagined that the maintenance and well doing of the industrious poor, were objects of greater consequence than the enrichment of a few individuals by any improvement in the implements of trade which threw the workmen out of employment, and rendered the labourer unworthy of his hire.

And, it must be confessed, that although the adoption of the enlarged machinery, in that state of our commerce which the country once boasted, might have been beneficial to the master without being detrimental to the servant; yet, in the present situation of our manufactures, rotting in warehouses without a prospect of exportation, with the demand for work and workmen equally diminished, frames of this construction tend materially to aggravate the distresses and discontents of the disappointed sufferers.

The first two cantos of Byron’s Childe Harold’s Pilgrimage came out on March 3, the same week as his maiden speech. He became an overnight superstar, says Fiona MacCarthy, his recent biographer, “the first European cultural celebrity of the modern age.”

Later, in canto 3, Byron wrote about “The child of love,–though born in bitterness, and nurtured in convulsion.” His daughter, born just before her parents separated, was Ada Augusta, later Lady Lovelace, promoter and explicator of Charles Babbage’s Analytical Engine. Unlike Babbage, she saw its potential as a symbol manipulator rather than a mere calculator and the possibilities for a much broader range of automation, beyond speeding up calculation.

Lovelace made clear, however, that what came to be known as “artificial intelligence” depends on human intelligence and does not replace it. Rather, it augments it, sometimes throwing new light on an established body of knowledge but never developing it from scratch: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.”

More generally, Lovelace argued that “In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable.” In this she anticipated the oft-repeated Silicon Valley maxim (known as Amara’s Law): “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

In 1956, sociologist Daniel Bell wrote in Work and its Discontents about this typical reaction to a new technology in the context of the emerging computerization of work: “Americans, with their tendency to exaggerate new innovations, have conjured up wild fears about changes that automation may bring.” Bell thought that the predictions of factories operating without humans were “silly,” but anticipated that “many workers, particularly older ones, may find it difficult ever again to find suitable jobs. It is also likely that small geographical pockets of the United States may find themselves becoming ‘depressed areas’ as old industries fade or are moved away.”

As the computerization of work has advanced, applying “artificial intelligence” to an increasing number of occupations, many commentators—and government officials—have become increasingly alarmed. In 1964, President Johnson established the National Commission on Technology, Automation, and Economic Progress, saying:

Technology is creating both new opportunities and new obligations for us–opportunity for greater productivity and progress–obligation to be sure that no workingman, no family must pay an unjust price for progress.

Automation is not our enemy. Our enemies are ignorance, indifference, and inertia. Automation can be the ally of our prosperity if we will just look ahead, if we will understand what is to come, and if we will set our course wisely after proper planning for the future.

A year later, economist Robert Heilbroner wrote that “as machines continue to invade society, duplicating greater and greater numbers of social tasks, it is human labor itself — at least, as we now think of ‘labor’ — that is gradually rendered redundant.”

In 2003, Kurt Vonnegut wrote:

Do you know what a Luddite is? A person who hates newfangled contraptions… Today we have contraptions like nuclear submarines armed with Poseidon missiles that have H-bombs in their warheads. And we have contraptions like computers that cheat you out of becoming. Bill Gates says, “Wait till you can see what your computer can become.” But it’s you who should be doing the becoming, not the damn fool computer. What you can become is the miracle you were born to be through the work that you do.

In Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2011), Erik Brynjolfsson and Andrew McAfee launched the current phase of the perennial debates over machines and jobs, and the automation and augmentation of human intelligence. They wrote: “In medicine, law, finance, retailing, manufacturing and even scientific discovery, the key to winning the race is not to compete against machines but to compete with machines.”

In 2018, economist Hal Varian argued that predictions based on demographics are the only valid ones:

There’s only one social science that can predict a decade or two in the future, and that’s demography. So we don’t really know where technology will be in 10 years or 20 years, but we have a good idea of how many 25- to 55-year-old people there’ll be in 10 years or 25 years… Baby Boomers are not going to be sources of growth in the labor force anymore because Baby Boomers are retiring…

…there will be fewer workers to support a larger group of nonworking people. And if we don’t increase productivity—which really means automation—then we’re going to be in big trouble in those developed countries…

So the automation, in my view, is coming along just in time—just in time to address this coming period of labor shortages…

The line in Silicon Valley is, “We always overestimate the amount of change that can occur in a year and we underestimate what can occur in a decade”… I think that’s a very good principle to keep in mind.

Posted in Artificial Intelligence, Race Against the Machine

Reacting to New Technologies: AI, Fake News, and Regulation


A number of this week’s [February 19, 2018] milestones in the history of technology demonstrate society’s reactions to new technologies over the years: A discussion of AI replacing and augmenting human intelligence, a warning about the abundance of misinformation on the internet, and government regulation of a communication platform, suppressing free speech in the name of public interest.

On February 20, 1947, Alan Turing gave a talk at the London Mathematical Society in which he declared that “what we want is a machine that can learn from experience.” Anticipating today’s enthusiasm about machine learning and deep learning, Turing declared that “It would be like a pupil who had learnt much from his master, but had added much more by his own work.  When this happens, I feel that one is obliged to regard the machine as showing intelligence.”

Turing also anticipated the debate over the impact of artificial intelligence on jobs: Does it destroy jobs (automation) or does it help humans do their jobs better and do more interesting things (augmentation)? Turing speculated that digital computers would replace some of the calculation work done at the time by human computers. But “the main bulk of the work done by these [digital] computers will however consist of problems which could not have been tackled by hand computing because of the scale of the undertaking.” (Here he was also anticipating today’s most popular word in Silicon Valley: “scale.”)

Advancing (and again, anticipating) the augmentation argument in the debate over AI and jobs, Turing suggested that humans will be needed to assess the accuracy of the calculations done by digital computers. At the same time, he also predicted the automation of high-value jobs (held by what he called “masters” as opposed to the “slaves” operating the computer) and the possible defense mechanisms invented by what today we call “knowledge workers”:

The masters are liable to get replaced because as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself…

They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well-chosen gibberish, whenever any dangerous suggestions were made.

Turing concluded his lecture with a plea for expecting intelligent machines to be no more intelligent than humans:

One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge, why should we expect more of a machine? Putting the same point differently, the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards.

What Turing did not anticipate was that digital computers would be used by individuals (as opposed to organizations) to pursue their personal goals, including adding to “the body of knowledge” inaccurate information, intentionally or unwittingly.

It was impossible for Turing to anticipate the evolution of the digital computers of his day into personal computers, smart phones, and the internet. That may have prevented him from seeing the possible parallels with radio broadcasting—a new technology that, only two decades before his lecture, allowed individuals to add to the body of knowledge and transmit whatever they were adding to many other people.

On February 23, 1927, President Calvin Coolidge signed the 1927 Radio Act, creating the Federal Radio Commission (FRC), forerunner of the Federal Communications Commission (FCC), established in 1934.

In “The Radio Act of 1927 as an Act of Progressivism,” Mark Goodman writes:

The technology and growth of radio had outpaced existing Congressional regulation, written in 1912 when radio meant ship-to-shore broadcasting. In the 1920s, by mailing a postcard to Secretary of Commerce Herbert Hoover, anyone with a radio transmitter could broadcast on the frequency chosen by Hoover. The airwaves were an open forum where anyone with the expertise and equipment could reach 25 million listeners.

By 1926, radio in the United States included 15,111 amateur stations, 1,902 ship stations, 553 land stations for maritime use, and 536 broadcasting stations. For those 536 broadcasting stations, the government allocated only eighty-nine wave lengths. The undisciplined and unregulated voice of the public interfered with corporate goals of delivering programming and advertising on a dependable schedule to a mass audience.

Congress faced many difficulties in trying to write legislation. No precedent existed for managing broadcasting except the powerless Radio Act of 1912. No one knew in 1926 where the technology was going nor what radio would be like even the next year, so Congress was trying to write the law to cover all potentialities.

Senator Key Pittman of Nevada expressed his frustration to the Senate chair: “I do not think, sir, that in the 14 years I have been here there has ever been a question before the Senate that in the very nature of the thing Senators can know so little about as this subject.”

Nor was the public much better informed, Pittman noted, even though he received telegrams daily urging passage. “I am receiving many telegrams from my State urging me to vote for this conference report, and informing me that things will go to pieces, that there will be a terrible situation in this country that cannot be coped with unless this report is adopted. Those telegrams come from people, most of whom, I know, know nothing on earth about this bill.”

The Radio Act of 1927 was based on a number of assumptions: That the equality of transmission facilities, reception, and service were worthy political goals; the notion that the spectrum belonged to the public but could be licensed to individuals; and that the number of channels on the spectrum was limited when compared to those who wanted access to it.

Concluding his study of the Radio Act, Mark Goodman writes:

Congress passed the Radio Act of 1927 to bring order to the chaos of radio broadcasting. In the process, Congressional representatives had to deal with several free speech issues, which were resolved in favor of the Progressive concepts of public interest, thereby limiting free speech… Congressmen feared radio’s potential power to prompt radical political or social reform, spread indecent language, and to monopolize opinions.  Therefore, the FRC was empowered to protect listeners from those who would not operate radio for “public interest, convenience, and necessity.”

Regulation stopped the use of the new communication platform by the masses, but six decades later, a new technology gave rise to a new (and massive) communication platform. The Web, which proliferated rapidly in the 1990s as a software layer on top of the 20-year-old internet, connected all computers (and eventually, smartphones) and greatly facilitated the creation and dissemination of information and misinformation. It brought about another explosion in the volume of additions to the body of knowledge, this time by millions of people worldwide.

On February 25, 1995, astronomer and author Clifford Stoll wrote in Newsweek:

After two decades online, I’m perplexed. It’s not that I haven’t had a gas of a good time on the Internet. I’ve met great people and even caught a hacker or two. But today, I’m uneasy about this most trendy and oversold community. Visionaries see a future of telecommuting workers, interactive libraries and multimedia classrooms. They speak of electronic town meetings and virtual communities. Commerce and business will shift from offices and malls to networks and modems. And the freedom of digital networks will make government more democratic.

Baloney. Do our computer pundits lack all common sense? The truth in no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works…

Every voice can be heard cheaply and instantly. The result? Every voice is heard. The cacophony more closely resembles citizens band radio, complete with handles, harassment, and anonymous threats. When most everyone shouts, few listen…

While the Internet beckons brightly, seductively flashing an icon of knowledge-as-power, this nonplace lures us to surrender our time on earth. A poor substitute it is, this virtual reality where frustration is legion and where—in the holy names of Education and Progress—important aspects of human interactions are relentlessly devalued.

Stoll’s was a lonely voice, mostly because the people with the loudest and most dominant voices at the time, those who owned and produced the news, believed like Stoll that “the truth in no online database will replace your daily newspaper.”

They did not anticipate how easy it would be for individuals to start their own blogs and that some blogs would grow into “new media” publications. They did not anticipate that an online database connecting millions of people would not only become a new dominant communication platform but would also start functioning like a daily newspaper.

Nicholas Thompson and Fred Vogelstein just published in Wired a blow-by-blow account of the most recent two years in the tumultuous life of Facebook, a company suffering from a split-personality disorder: Is it a “dumb pipe” (as dominant communication platforms seeking to avoid responsibility for the content they carry liked to call themselves in the past) or a newspaper?

One manifestation of the clash between Facebook’s “religious tenet” that it is “an open, neutral platform” (as Wired describes it) and its desire to crush Twitter (which is why it “came to dominate how we discover and consume news”) was the hiring and firing of a team of journalists, first attempting to use humans to scrub the news and then trusting the machine to do a better job. In a passage illustrating how prescient Turing was, Thompson and Vogelstein write:

…the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.

Most of the Wired article, however, is not about AI’s impact on jobs but about Facebook’s impact on the “fake news” meme and vice versa. Specifically, it is about the repercussions of Mark Zuckerberg’s assertion two days after the 2016 presidential election that the idea that fake news on Facebook influenced the election in any way “is a pretty crazy idea.”

The article moves from the dismay of Facebook employees at their leader’s political insensitivity, to a trio (a security researcher, a venture capitalist and early investor in Facebook, and a design ethicist) banding together to lead the charge against the evil one and talk “to anyone who would listen about Facebook’s poisonous effects on American democracy,” and on to the receptive audiences with “their own mounting grievances” against Facebook—the media and Congress. The solution, for some, is regulation, just like with radio broadcasting in the 1920s: “The company won’t protect us by itself, and nothing less than our democracy is at stake,” says Facebook’s former privacy manager.

All this social pressure was too much for Zuckerberg, who made a complete about-face during the company’s Q3 earnings call last year: “I’ve expressed how upset I am that the Russians tried to use our tools to sow mistrust. We build these tools to help people connect and to bring us closer together. And they used them to try to undermine our values.”

After going through a year of “coercive persuasion” (i.e., indoctrination), Zuckerberg has bought into the fake news that fake news can influence elections and destroy democracy. The Wired article never questions this meme, this dogma, this “filter bubble” where everybody accepts something as a given and never questions it. Most (all?) journalists, commentators, and politicians today never question it.

Wired explains Zuckerberg’s politically incorrect dismissal of the idea that fake news on Facebook can influence elections by describing him as someone who likes “to form his opinions from data.” The analysis given to him before he uttered his inconceivable opinion, according to Wired, “was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups.”

Excellent point. But the article, like most (all?) articles on the subject, does not provide any data showing the influence of fake news on voters’ actions.

I have good news for Zuckerberg, real news that may help jump-start his recovery from fake news brainwashing, on the way to, yet again, forming his opinions (and decisions) on the basis of facts. The January 2018 issue of Computer, published by the IEEE Computer Society, leads with an article by Wendy Hall, Ramine Tinati, and Will Jennings of the University of Southampton, titled “From Brexit to Trump: Social Media’s Role in Democracy.” It summarizes their study (and related work) of the role of social media in political campaigns. Their conclusion?

“Our ability to understand the impact that social networks have had on the democratic process is currently very limited.”

Posted in Artificial Intelligence, Facebook, Radio, Turing

The Undaunted Ambition of American Entrepreneurs and Inventors and the Incredible Hype They Generate

A number of this week’s [February 12, 2018] milestones in the history of technology link the rise of IBM, the introduction of the ENIAC, and the renewed fascination with so-called artificial intelligence.


On February 14, 1924, the Computing-Tabulating-Recording Company (CTR) changed its name to International Business Machines Corporation (IBM). “IBM” was first used for CTR’s subsidiaries in Canada and South America, but after “several years of persuading a slow-moving board of directors,” Thomas and Marva Belden note in The Lengthening Shadow, Thomas J. Watson Sr. succeeded in applying it to the entire company: “International to represent its big aspirations and Business Machines to evade the confines of the office appliance industry.”

As Kevin Maney observes in The Maverick and His Machine, IBM “was still an upstart little company” in 1924, when “revenues climbed to $11 million – not quite back to 1920 levels. (An $11 million company in 1924 was equivalent to a company in 2001 with revenues of $113 million…).”

The upstart, according to Watson, was going to live forever. From a talk he gave at the first meeting of IBM’s Quarter Century Club (employees who have served the company for 25 years), on June 21, 1924:

The opportunities of the future are bound to be greater than those of the past, and to the possibilities of this business there is no limit so long as it holds the loyal cooperation of men and women like yourselves.

And in January 1926, at the One Hundred Percent Club Convention:

This business has a future for your sons and grandsons and your great-grandsons, because it is going on forever.  Nothing in the world will ever stop it. The IBM is not merely an organization of men; it is an institution that will go on forever.

On February 14, 1946, The New York Times announced the unveiling of “an amazing machine that applies electronic speeds for the first time to mathematical tasks hitherto too difficult and cumbersome for solution… Leaders who saw the device in action for the first time,” the report continued, “heralded it as a tool with which to begin to rebuild scientific affairs on new foundations.”

With those words, the Electronic Numerical Integrator and Computer (ENIAC), the world’s first large-scale electronic general-purpose digital computer, developed at the Moore School of Electrical Engineering at the University of Pennsylvania in Philadelphia, emerged from the wraps of secrecy under which it had been constructed in the last years of World War II.

“The ENIAC weighed 30 tons, covered 1,500 square feet of floor space, used over 17,000 vacuum tubes (five times more than any previous device), 70,000 resistors, 10,000 capacitors, 1,500 relays, and 6,000 manual switches, consumed 174,000 watts of power, and cost about $500,000,” says C. Dianne Martin in her 1995 article “ENIAC: The Press Conference That Shook the World.” After reviewing the press reports about the public unveiling of the ENIAC, she concludes:

Like many other examples of scientific discovery during the last 50 years, the press consistently used exciting imagery and metaphors to describe the early computers. The science journalists covered the development of computers as a series of dramatic events rather than as an incremental process of research and testing. Readers were given hyperbole designed to raise their expectations about the use of the new electronic brains to solve many different kinds of problems.

This engendered premature enthusiasm, which then led to disillusionment and even distrust of computers on the part of the public when the new technology did not live up to these expectations.

The premature enthusiasm and exciting imagery were not confined to ENIAC or the press. In the same vein, and in the same year (1946), Waldemar Kaempffert reported in The New York Times:

Crude in comparison with brains as the computing machines may be that solve in a few seconds mathematical problems that would ordinarily take hours, they behave as if they had a will of their own. In fact, the talk at the meeting of the American Institute of Electrical Engineers was half electronics, half physiology. One scientist excused the absence of a colleague, the inventor of a new robot, with the explanation that ‘he couldn’t bear to leave the machine at home alone’ just as if it were a baby.

On February 15, 1946, The Electronic Numerical Integrator And Computer (ENIAC) was formally dedicated at the University of Pennsylvania. Thomas Haigh, Mark Priestley and Crispin Rope in ENIAC in Action: Making and Remaking the Modern Computer:

ENIAC established the feasibility of high-speed electronic computing, demonstrating that a machine containing many thousands of unreliable vacuum tubes could nevertheless be coaxed into uninterrupted operation for long enough to do something useful.

During an operational life of almost a decade ENIAC did a great deal more than merely inspire the next wave of computer builders. Until 1950 it was the only fully electronic computer working in the United States, and it was irresistible to many governmental and corporate users whose mathematical problems required a formerly infeasible amount of computational work. By October of 1955, when ENIAC was decommissioned, scores of people had learned to program and operate it, many of whom went on to distinguished computing careers.

Reviewing ENIAC in Action, I wrote:

Today’s parallel to the ENIAC-era big calculation is big data, as is the notion of “discovery” and the abandonment of hypotheses. “One set initial parameters, ran the program, and waited to see what happened” is today’s “unreasonable effectiveness of data.” There is a direct line of scientific practice from the ENIAC pioneering simulations to “automated science.” But is the removal of human imagination from scientific practice good for scientific progress?

Similarly, it’s interesting to learn about the origins of today’s renewed interest in, fascination with, and fear of “artificial intelligence.” Haigh, Priestley and Rope argue against the claim that the “irresponsible hyperbole” regarding early computers was generated solely by the media, writing that “many computing pioneers, including John von Neumann, [conceived] of computers as artificial brains.”

Indeed, in his A First Draft of a Report on the EDVAC—which became the foundation text of modern computer science (or more accurately, computer engineering practice)—von Neumann compared the components of the computer to “the neurons of higher animals.” While von Neumann thought that the brain was a computer, he allowed that it was a complex one, and he followed McCulloch and Pitts (in their 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”) in ignoring, as he wrote, “the more complicated aspects of neuron functioning.”

Given that McCulloch said about the “neurons” discussed in his and Pitts’ seminal paper that they “were deliberately as impoverished as possible,” what we have at the dawn of “artificial intelligence” is simplification squared, based on an extremely limited (possibly non-existent at the time) understanding of how the human brain works.
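
To make the simplification concrete, here is a minimal sketch of a McCulloch-Pitts unit (my own illustration, not the notation of the 1943 paper): binary inputs, fixed weights, and a hard threshold, enough to compute Boolean functions such as AND and OR, and nothing like the behavior of a biological neuron.

```python
# A minimal McCulloch-Pitts-style threshold unit: binary inputs, fixed weights,
# and a hard threshold. An illustrative sketch, not the 1943 paper's formalism.
def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With weights (1, 1), a threshold of 2 computes AND and a threshold of 1 computes OR.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "AND:", mp_neuron(x, (1, 1), 2), "OR:", mp_neuron(x, (1, 1), 1))
```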

These mathematical exercises, born out of the workings of very developed brains but not mimicking or even remotely describing them, led to the development of “artificial neural networks” which led to “deep learning” which led to general excitement today about computer programs “mimicking the brain” when they succeed in identifying cat images or beating a Go champion.

In 1949, computer scientist Edmund Berkeley wrote in his book Giant Brains, or Machines That Think: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”

Haigh, Priestley and Rope write that “…the idea of computers as brains was always controversial, and… most people professionally involved with the field had stepped away from it by the 1950s.” But thirty years later, Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.”

Most computer scientists by that time were indeed occupied by less lofty goals than playing God, but very few objected to this kind of statement, or to Minsky receiving the most prestigious award in their profession (for establishing the field of artificial intelligence). The idea that computers and brains are the same thing today leads people with very developed brains to conclude that if computers can win in Go, they can think, and that with just a few more short steps up the neural network evolution ladder, computers will reason that it’s in their best interests to destroy humanity.

On February 15, 2011, IBM’s computer Watson commented on the results of his match the previous night with two Jeopardy champions: “There is no way I’m going to let these simian creatures defeat me. While they’re sleeping, I’m processing countless terabytes of useless information.”

The last bit, of course, is stored in “his” memory under the category “Oscar Wilde.”

Posted in Computer history, IBM

Deep Blue and Deep Learning: It’s all About Brute Force

There are interesting parallels between one of this week’s milestones in the history of technology and the current excitement and anxiety about artificial intelligence (AI).

On February 10, 1996, IBM’s Deep Blue became the first machine to win a chess game against a reigning world champion, Garry Kasparov. Kasparov won three and drew two of the following five games, defeating Deep Blue by a score of 4–2.  In May 1997, an upgraded version of Deep Blue won the six-game rematch 3½–2½ to become the first computer to defeat a reigning world champion in a match under standard chess tournament time controls.

Deep Blue was an example of so-called “artificial intelligence” achieved through “brute force,” the super-human calculating speed that has been the hallmark of digital computers since they were invented in the 1940s. Deep Blue was a specialized, purpose-built computer, the fastest to face a chess world champion, capable of examining 200 million moves per second, or 50 billion positions in the three minutes allocated for a single move in a chess game.
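
For readers who want to see what brute-force search means in code, here is a toy, self-contained sketch of exhaustive game-tree (minimax) search, applied to a trivial take-1-2-or-3-stones game rather than to chess. It illustrates the general technique only; Deep Blue's actual program added specialized hardware, alpha-beta pruning, and chess-specific evaluation.

```python
# Toy sketch of brute-force minimax search (illustrative only, not Deep Blue's
# code). Game: players alternately take 1-3 stones; whoever takes the last
# stone wins. The search examines every possible line of play.
def minimax(stones, maximizing):
    """Return +1 if the maximizing side can force a win from this position, else -1."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    moves = range(1, min(3, stones) + 1)
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

print(minimax(8, True))  # 8 stones: a lost position for the side to move, prints -1
print(minimax(7, True))  # 7 stones: a won position for the side to move, prints 1
```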

To many observers, this was another milestone in man’s quest to build a machine in his own image and another indicator that it’s just a matter of time before we create a self-conscious machine complex enough to mimic the brain and display human-like intelligence or even super-intelligence.

An example of such a “the mind is a meat machine” philosophy (to quote Marvin Minsky) is Charles Krauthammer’s “Be Afraid,” published in the Weekly Standard on May 26, 1997. To Krauthammer, Deep Blue’s win over Kasparov in the first game of the 1996 match was due to “brute force” calculation, which is not artificial intelligence, he says, just faster calculation of a much wider range of possible tactical moves.

But one specific move in Game 2 of the 1997 match, a game that Kasparov based not on tactics, but on strategy (where human players have a great advantage over machines), was “the lightning flash that shows us the terrors to come.” Krauthammer continues:

What was new about Game Two… was that the machine played like a human. Grandmaster observers said that had they not known who was playing they would have imagined that Kasparov was playing one of the great human players, maybe even himself. Machines are not supposed to play this way… To the amazement of all, not least Kasparov, in this game drained of tactics, Deep Blue won. Brilliantly. Creatively. Humanly. It played with — forgive me — nuance and subtlety.

Fast forward to March 2016, to Cade Metz writing in Wired on Go champion Lee Sedol’s loss to AlphaGo at the Google DeepMind Challenge Match. In “The AI Behind AlphaGo Can Teach Us About Being Human,” Metz reported on yet another earth-shattering artificial-intelligence-becoming-human-intelligence move:

Move 37 showed that AlphaGo wasn’t just regurgitating years of programming or cranking through a brute-force predictive algorithm. It was the moment AlphaGo proved it understands, or at least appears to mimic understanding in a way that is indistinguishable from the real thing. From where Lee sat, AlphaGo displayed what Go players might describe as intuition, the ability to play a beautiful game not just like a person but in a way no person could.

AlphaGo used 1,920 Central Processing Units (CPUs) and 280 Graphics Processing Units (GPUs), according to The Economist, and possibly additional proprietary Tensor Processing Units, for a lot of hardware power, plus brute-force statistical analysis software known as Deep Neural Networks, or more popularly as Deep Learning.
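
What that statistical software computes at playing time is, at bottom, layers of matrix arithmetic. The sketch below is an assumed toy network with random weights (nothing to do with AlphaGo's actual architecture); it only shows the kind of repeated multiply-and-threshold operations a deep neural network performs in a forward pass.

```python
# Toy forward pass through a small multi-layer network with random weights.
# Purely illustrative; AlphaGo's real networks were trained and far larger.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # simple nonlinearity between layers

# Three random weight matrices stand in for a trained two-hidden-layer network.
W1, W2, W3 = rng.normal(size=(16, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 1))

def forward(x):
    """Layers of matrix multiplication plus nonlinearities: the 'brute force' arithmetic."""
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)
    return h2 @ W3

print(forward(rng.normal(size=(1, 16))))  # a single score for one random input
```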

Still, Google’s programmers have not dissuaded anyone from believing they are creating human-like machines, and they have often promoted the idea (the only Google exception I know of is Peter Norvig, but he is a member of neither the Google Brain nor the Google DeepMind team, Google’s AI avant-garde).

IBM’s programmers, in contrast, were more modest. Krauthammer quotes Joe Hoane, one of Deep Blue’s programmers, answering the question “How much of your work was devoted specifically to artificial intelligence in emulating human thought?” Hoane’s answer: “No effort was devoted to [that]. It is not an artificial intelligence project in any way. It is a project in — we play chess through sheer speed of calculation and we just shift through the possibilities and we just pick one line.”

So the earth-shattering moves may have been just a bug in the software. But that explanation escaped observers, then and now, who prefer to believe that humans can create intelligent machines (“giant brains,” as they were called in the early days of very fast calculators) because the only difference between humans and machines is the degree of complexity, the sheer number of human or artificial neurons firing. Here’s Krauthammer:

You build a machine that does nothing but calculation and it crosses over and creates poetry. This is alchemy. You build a device with enough number-crunching algorithmic power and speed—and, lo, quantity becomes quality, tactics becomes strategy, calculation becomes intuition… After all, how do humans get intuition and thought and feel? Unless you believe in some metaphysical homunculus hovering over (in?) the brain directing its bits and pieces, you must attribute our strategic, holistic mental abilities to the incredibly complex firing of neurons in the brain.

We are all materialists now. Or almost all of us. Read here and (especially) here for a different take.

If you are not interested in philosophical debates (and prefer to ignore the fact that the dominant materialist paradigm affects—through government policies, for example—many aspects of your life), at least read Tom Simonite’s excellent Wired article “AI Beat Humans at Reading! Maybe Not,” in which he shows how exaggerated various recent claims of AI “breakthroughs” are.

Beware of fake AI news and be less afraid.

Posted in Artificial Intelligence

From Social Security to Social Anxiety


Ida M. Fuller with the first social security check.

Two of this week’s milestones in the history of technology highlight the evolution in the use of computing machines from supporting social security to boosting social cohesion and social anxiety.

On January 31, 1940, Ida M. Fuller became the first person to receive an old-age monthly benefit check under the new Social Security law. Her first check was for $22.54. The Social Security Act was signed into law by Franklin Roosevelt on August 14, 1935. Kevin Maney in The Maverick and His Machine: “No single flourish of a pen had ever created such a gigantic information processing problem.”

But IBM was ready. Its President, Thomas Watson, Sr., defied the odds and, during the early years of the Depression, continued to invest in research and development, build inventory, and hire people. As a result, according to Maney, “IBM won the contract to do all of the New Deal’s accounting – the biggest project to date to automate the government. … Watson borrowed a common recipe for stunning success: one part madness, one part luck, and one part hard work to be ready when luck kicked in.”

The nature of processing information before computers is evident in the description of the building in which the Social Security administration was housed at the time:

The most prominent aspect of Social Security’s operations in the Candler Building was the massive amount of paper records processed and stored there.  These records were kept in row upon row of filing cabinets – often stacked double-decker style to minimize space requirements.  One of the most interesting of these filing systems was the Visible Index, which was literally an index to all of the detailed records kept in the facility.  The Visible Index was composed of millions of thin bamboo strips wrapped in paper upon which specialized equipment would type every individual’s name and Social Security number.  These strips were inserted onto metal pages which were assembled into large sheets. By 1959, when Social Security began converting the information to microfilm, there were 163 million individual strips in the Visible Index.

On January 1, 2011, the first members of the baby boom generation reached retirement age. The number of retired workers is projected to grow rapidly and will double in less than 30 years. People are also living longer, and the birth rate is low. As a result, the ratio of workers paying Social Security taxes to people collecting benefits will fall from 3.0-to-1 in 2009 to 2.1-to-1 in 2031.

In 1955, the 81-year-old Ida Fuller (who died on January 31, 1975, at age 100, having collected $22,888.92 in monthly Social Security benefits against her total contributions of $24.75) said: “I think that Social Security is a wonderful thing for the people. With my income from some bank stock and the rental from the apartment, Social Security gives me all I need.”

Sixty-four years after the first Social Security check was issued, paper checks had been replaced by online transactions, and letters, once the primary form of person-to-person communication, had been replaced by web-based social networks.

On February 4, 2004, Facebook was launched when it went live at Harvard University. Its home screen read, says David Kirkpatrick in The Facebook Effect, “Thefacebook is an online directory that connects people through social networks at colleges.” Zuckerberg’s classmate Andrew McCollum designed a logo using an image of Al Pacino he’d found online, which he covered with a fog of ones and zeros.


Four days after the launch, more than 650 students had registered, and by the end of May it was operating in 34 schools and had almost 100,000 users. “The nature of the site,” Zuckerberg told the Harvard Crimson on February 9, “is such that each user’s experience improves if they can get their friends to join in.” In late 2017, Facebook had 2.07 billion monthly active users.

Successfully connecting more than a third of the world’s adult (15+) population brings a lot of scrutiny. “Is Social Media the Tobacco Industry of the 21st Century?” asked one recent article, summing up the current sentiment about Facebook.

The most discussed complaint is that Facebook is bad for democracy, aiding and abetting the rise of “fake news” and “echo chambers.”

Why blame the network for what its users do with it? And how exactly does what American citizens do with it impact their freedom to vote in American elections?

Consider the social network of the 18th century. On November 2, 1772, the town of Boston established a Committee of Correspondence as an agency to organize a public information network in Massachusetts. The Committee drafted a pamphlet and a cover letter, which it circulated to 260 Massachusetts towns and districts, instructing them in current politics and inviting each to express its views publicly. In each town, community leaders read the pamphlet aloud, and the townspeople discussed, debated, and chose a committee to draft a response, which was in turn read aloud and voted upon.

When 140 towns responded and their responses were published in the newspapers, “it was evident that the commitment to informed citizenry was widespread and concrete,” according to Richard D. Brown (in Chandler and Cortada, eds., A Nation Transformed by Information). But why this commitment?

In Liah Greenfeld‘s words (in Nationalism: Five Roads to Modernity), “Americans had a national identity… which in theory made every individual the sole legitimate representative of his own interests and equal participant in the political life of the collectivity. It was grounded in the values of reason, equality, and individual liberty.”

The Internet is not “inherently” democratizing, any more than the telegraph was, no matter how much people have always wanted to believe in the power of technology to transform society. What makes societies democratic is believing in and upholding the right values over a long period of time: individual liberty and the responsibility of an informed citizenry that makes its own decisions while debating, discussing, and sharing information (and misinformation).

More than a century after the establishment of the first social network in the U.S., the citizenry was informed (and mis-informed) by hundreds of newspapers, mostly sold by newspaper boys on the streets. After paying a visit to the United States, Charles Dickens described (in Martin Chuzzlewit, 1844) the newsboys greeting a ship in New York Harbor: “’Here’s this morning’s New York Stabber! Here’s the New York Family Spy! Here’s the New York Private Listener! … Here’s the full particulars of the patriotic loco-foco movement yesterday, in which the whigs were so chawed up, and the last Alabama gauging case … and all the Political, Commercial and Fashionable News. Here they are!’ … ‘It is in such enlightened means,’ said a voice almost in Martin’s ear, ‘that the bubbling passions of my country find a vent.’”

Another visitor from abroad, the Polish writer Henryk Sienkiewicz, could discern (in Portrait of America, 1876) in the mass circulation of newspapers the American belief in a universal need for information: “In Poland, a newspaper subscription tends to satisfy purely intellectual needs and is regarded as somewhat of a luxury which the majority of the people can heroically forego; in the United States a newspaper is regarded as a basic need of every person, indispensable as bread itself.”

A basic need for information of all kinds, as Mark Twain observed (in Pudd’nhead Wilson, 1897): “The old saw says, ‘Let a sleeping dog lie.’ Right. Still, when there is much at stake, it is better to get a newspaper to do it.”

Blaming Facebook for fake news is like blaming the newspaper boys for the fake news the highly partisan 19th century newspapers were in the habit of publishing. Somehow, American democracy survived.

Consider a more recent fact: According to a Gallup/Knight Foundation survey, the American public divides evenly on the question of who is primarily responsible for ensuring people have an accurate and politically balanced understanding of the news—48% say the news media (“by virtue of how they report the news and what stories they cover”) and 48% say individuals themselves (“by virtue of what news sources they use and how carefully they evaluate the news”).

Half of the American public believes someone else is responsible for feeding them the correct “understanding of the news.” Facebook had little to do with the erosion of the belief in individual responsibility, but it is certainly feeling the impact of the drift away from the values upheld by the users of the 18th-century social network.
