From Social Security to Social Anxiety


Ida M. Fuller with the first social security check.

Two of this week’s milestones in the history of technology highlight the evolution in the use of computing machines from supporting social security to boosting social cohesion and social anxiety.

On January 31, 1940, Ida M. Fuller became the first person to receive an old-age monthly benefit check under the new Social Security law. Her first check was for $22.54. The Social Security Act was signed into law by Franklin Roosevelt on August 14, 1935. Kevin Maney in The Maverick and His Machine: “No single flourish of a pen had ever created such a gigantic information processing problem.”

But IBM was ready. Its President, Thomas Watson, Sr., defied the odds and during the early years of the Depression continued to invest in research and development, building inventory, and hiring people. As a result, according to Maney, “IBM won the contract to do all of the New Deal’s accounting – the biggest project to date to automate the government. … Watson borrowed a common recipe for stunning success: one part madness, one part luck, and one part hard work to be ready when luck kicked in.”

The nature of processing information before computers is evident in the description of the building in which the Social Security Administration was housed at the time:

The most prominent aspect of Social Security’s operations in the Candler Building was the massive amount of paper records processed and stored there.  These records were kept in row upon row of filing cabinets – often stacked double-decker style to minimize space requirements.  One of the most interesting of these filing systems was the Visible Index, which was literally an index to all of the detailed records kept in the facility.  The Visible Index was composed of millions of thin bamboo strips wrapped in paper upon which specialized equipment would type every individual’s name and Social Security number.  These strips were inserted onto metal pages which were assembled into large sheets. By 1959, when Social Security began converting the information to microfilm, there were 163 million individual strips in the Visible Index.

On January 1, 2011, the first members of the baby boom generation reached retirement age. The number of retired workers is projected to grow rapidly and will double in less than 30 years. People are also living longer, and the birth rate is low. As a result, the ratio of workers paying Social Security taxes to people collecting benefits will fall from 3.0 to 1 in 2009 to 2.1 to 1 in 2031.

In 1955, the 81-year-old Ida Fuller (who died on January 31, 1975, aged 100, after collecting $22,888.92 from Social Security monthly benefits, compared to her contributions of $24.75) said: “I think that Social Security is a wonderful thing for the people. With my income from some bank stock and the rental from the apartment, Social Security gives me all I need.”

Sixty-four years after the first Social Security check was issued, paper checks were being replaced by online transactions, and letters, long the primary form of person-to-person communication, were being replaced by web-based social networks.

On February 4, 2004, Facebook was launched when Thefacebook.com went live at Harvard University. Its home screen read, says David Kirkpatrick in The Facebook Effect, “Thefacebook is an online directory that connects people through social networks at colleges.” Zuckerberg’s classmate Andrew McCollum designed a logo using an image of Al Pacino he had found online, covered with a fog of ones and zeros.


Four days after the launch, more than 650 students had registered and by the end of May, it was operating in 34 schools and had almost 100,000 users. “The nature of the site,” Zuckerberg told the Harvard Crimson on February 9, “is such that each user’s experience improves if they can get their friends to join in.” In late 2017, Facebook had 2.07 billion monthly active users.

Successfully connecting more than a third of the world’s adult (15+) population brings a lot of scrutiny. “Is Social Media the Tobacco Industry of the 21st Century?” asked an article on Forbes.com recently, summing up the current sentiment about Facebook.

The most discussed complaint is that Facebook is bad for democracy, aiding and abetting the rise of “fake news” and “echo chambers.”

Why blame the network for what its users do with it? And how exactly does what American citizens do with it impact their freedom to vote in American elections?

Consider the social network of the 18th century. On November 2, 1772, the town of Boston established a Committee of Correspondence as an agency to organize a public information network in Massachusetts. The Committee drafted a pamphlet and a cover letter which it circulated to 260 Massachusetts towns and districts, instructing them in current politics and inviting each to express its views publicly. In each town, community leaders read the pamphlet aloud and the town’s people discussed, debated, and chose a committee to draft a response which was read aloud and voted upon.

When 140 towns responded and their responses were published in the newspapers, “it was evident that the commitment to informed citizenry was widespread and concrete,” according to Richard D. Brown (in Chandler and Cortada, eds., A Nation Transformed by Information). But why this commitment?

In Liah Greenfeld‘s words (in Nationalism: Five Roads to Modernity), “Americans had a national identity… which in theory made every individual the sole legitimate representative of his own interests and equal participant in the political life of the collectivity. It was grounded in the values of reason, equality, and individual liberty.”

The Internet is not “inherently” democratizing, any more than the telegraph ever was and no matter how much people have always wanted to believe in the power of technology to transform society. Believing in and upholding the right values over a long period of time—individual liberty and the responsibility of an informed citizenry making its own decisions while debating, discussing, and sharing information (and mis-information)—is what makes societies democratic.

More than a century after the establishment of the first social network in the U.S., the citizenry was informed (and mis-informed) by hundreds of newspapers, mostly sold by newspaper boys on the streets. After paying a visit to the United States, Charles Dickens described (in Martin Chuzzlewit, 1844) the newsboys greeting a ship in New York Harbor: “’Here’s this morning’s New York Stabber! Here’s the New York Family Spy! Here’s the New York Private Listener! … Here’s the full particulars of the patriotic loco-foco movement yesterday, in which the whigs were so chawed up, and the last Alabama gauging case … and all the Political, Commercial and Fashionable News. Here they are!’ … ‘It is in such enlightened means,’ said a voice almost in Martin’s ear, ‘that the bubbling passions of my country find a vent.’”

Another visitor from abroad, the Polish writer Henryk Sienkiewicz, discerned (in Portrait of America, 1876) in the mass circulation of newspapers the American belief in a universal need for information: “In Poland, a newspaper subscription tends to satisfy purely intellectual needs and is regarded as somewhat of a luxury which the majority of the people can heroically forego; in the United States a newspaper is regarded as a basic need of every person, indispensable as bread itself.”

A basic need for information of all kinds, as Mark Twain observed (in Pudd’nhead Wilson, 1897): “The old saw says, ‘Let a sleeping dog lie.’ Right. Still, when there is much at stake, it is better to get a newspaper to do it.”

Blaming Facebook for fake news is like blaming the newspaper boys for the fake news the highly partisan 19th century newspapers were in the habit of publishing. Somehow, American democracy survived.

Consider a more recent fact: According to a Gallup/Knight Foundation survey, the American public divides evenly on the question of who is primarily responsible for ensuring people have an accurate and politically balanced understanding of the news—48% say the news media (“by virtue of how they report the news and what stories they cover”) and 48% say individuals themselves (“by virtue of what news sources they use and how carefully they evaluate the news”).

Half of the American public believes someone else is responsible for feeding them the correct “understanding of the news.” Facebook had little to do with the erosion of the belief in individual responsibility, but it is certainly feeling the impact of the drift away from the values upheld by the users of the 18th-century social network.

Originally published on Forbes.com


How Steve Jobs and Thomas Watson Sr. Sold AI to the Public

A number of this week’s milestones in the history of technology showcase two prominent computer industry showmen, Steve Jobs and Thomas Watson Sr., their respective companies, Apple and IBM, and how they sold smart machines to the general public.

On January 22, 1984, the Apple Macintosh was introduced in the “1984” television commercial aired during Super Bowl XVIII. The commercial was later called by Advertising Age “the greatest commercial ever made.” A few months earlier, Steve Jobs said this before showing a preview of the commercial:

It is now 1984. It appears IBM wants it all. Apple is perceived to be the only hope to offer IBM a run for its money. Dealers initially welcoming IBM with open arms now fear an IBM dominated and controlled future. They are increasingly turning back to Apple as the only force that can ensure their future freedom. IBM wants it all and is aiming its guns on its last obstacle to industry control: Apple. Will Big Blue dominate the entire computer industry? The entire information age? Was George Orwell right about 1984?

Thirty-six years earlier, another master promoter, the one who laid the foundation for Big Blue’s domination, intuitively understood the importance of making machines endowed with artificial intelligence (or “giant brains,” as they were called at the time) palatable to the general public.

On January 27, 1948, IBM announced the Selective Sequence Electronic Calculator (SSEC) and demonstrated it to the public. “The most important aspect of the SSEC,” according to Brian Randell in the Origins of Digital Computers, “was that it could perform arithmetic on, and then execute, stored instructions – it was almost certainly the first operational machine with these capabilities.”

As Kevin Maney explains in The Maverick and His Machine, IBM’s CEO, Thomas Watson Sr., “didn’t know much about how to build an electronic computer,” but in 1947, he “was the only person on earth who knew how to sell” one. Maney:

The engineers finished testing the SSEC in late 1947 when Watson made a decision that forever altered the public perception of computers and linked IBM to the new generation of information machines. He told the engineers to disassemble the SSEC and set it up in the ground floor lobby of IBM’s 590 Madison Avenue headquarters. The lobby was open to the public and its large windows allowed a view of the SSEC for the multitudes cramming the sidewalks on Madison and 57th street. … The spectacle of the SSEC defined the public’s image of a computer for decades. Kept dust-free behind glass panels, reels of electronic tape ticked like clocks, punches stamped out cards and whizzed them into hoppers, and thousands of tiny lights flashed on and off in no discernable pattern… Pedestrians stopped to gawk and gave the SSEC the nickname “Poppy.” … Watson took the computer out of the lab and sold it to the public.

The SSEC ran at 590 Madison Ave. until July 1952 when it was replaced by a new IBM computer, the first to be mass-produced. According to Columbia University’s website for the SSEC, it “inspired a generation of cartoonists to portray the computer as a series of wall-sized panels covered with lights, meters, dials, switches, and spinning rolls of tape.”

As IBM was one of a handful of computer pioneers establishing a new industry, Watson’s key selling point to the general public was not challenging the alleged thought control of a dominant competitor, as Steve Jobs would do more than three decades later, but extolling computer-aided thought expansion: “…to explore the consequences of man’s thought to the outermost reaches of time, space, and physical conditions.” Watson was the first to see that “AI” stood not only for “artificial intelligence” but also for human “augmented intelligence.”

Like his better-known successor more than three decades later, Thomas Watson Sr. was a perfectionist. When he reviewed the SSEC “exhibition” prior to the public unveiling, he remarked that “The sweep of this room is hindered by those large black columns down the center. Have them removed before the ceremony.” But since they supported the building, the columns stayed. Instead, the photo in the brochure handed out at the ceremony (see image at the top of this article) was carefully retouched to remove all traces of the offending columns.

IBM became the dominant computer company and, because it “wanted it all,” entered the new PC market in August 1981. Apple failed in its initial response, the Apple Lisa, but following the airing of the “1984” TV commercial, the Apple Macintosh was launched on January 24, 1984. It was the first mass-market personal computer featuring a graphical user interface and a mouse, offering two applications, MacWrite and MacPaint, designed to show off its innovative interface. By April 1984, 50,000 Macintoshes were sold.

Steven Levy announced in Rolling Stone: “This [is] the future of computing.” The magazine’s 1984 article is full of quotable quotes. From Steve Jobs:

I don’t want to sound arrogant, but I know this thing is going to be the next great milestone in this industry. Every bone in my body says it’s going to be great, and people are going to realize that and buy it.

From Bill Gates:

People concentrate on finding Jobs’ flaws, but there’s no way this group could have done any of this stuff without Jobs. They really have worked miracles.

From Mitch Kapor, developer of Lotus 1-2-3, a best-selling program for the IBM PC:

The IBM PC is a machine you can respect. The Macintosh is a machine you can love.

Here’s Steve Jobs introducing the Macintosh at the Apple shareholders meeting on January 24, 1984. And the Mac said: “Never trust a computer you cannot lift.”

In January 1984, I started working for NORC, a social science research center at the University of Chicago. Over the next 12 months or so, I experienced the shift from large, centralized computers to personal ones and the shift from a command-line to a graphical user interface.

I was responsible, among other things, for managing $2.5 million in survey research budgets. At first, I used the budget management application running on the University’s VAX mini-computer (mini, as opposed to mainframe). I would log on using a remote terminal, type some commands, and enter the new numbers I needed to record. Then, after an hour or two of hard work, I pressed a key on the terminal, telling the VAX to re-calculate the budget with the new data I had entered. To this day, I remember my great frustration and dismay when the VAX came back telling me something was wrong with the data I had entered. Telling me what exactly was wrong was beyond what the VAX—or any other computer program at that time—could do (this was certainly true in the case of the mini-computer accounting program I used).

I had to start the work from the beginning and hope that on the second or third try I would get everything right and the new budget spreadsheet would be created. This, by the way, was no different from my experience working for a bank a few years before, where I totaled by hand on an NCR accounting machine the transactions for the day. Quite often I would get to the end of the pile of checks only to find out that the accounts didn’t balance because somewhere I had entered a wrong number. And I would have to enter all the data again.

This linear approach to accounting and finance changed in 1979 when Dan Bricklin and Bob Frankston invented VisiCalc, the first electronic spreadsheet and the first killer app for personal computers.

Steven Levy has described the way financial calculations were done at the time (on paper!) and Bricklin’s epiphany in 1978 when he was a student at the Harvard Business School:

The problem with ledger sheets was that if one monthly expense went up or down, everything – everything – had to be recalculated. It was a tedious task, and few people who earned their MBAs at Harvard expected to work with spreadsheets very much. Making spreadsheets, however necessary, was a dull chore best left to accountants, junior analysts, or secretaries. As for sophisticated “modeling” tasks – which, among other things, enable executives to project costs for their companies – these tasks could be done only on big mainframe computers by the data-processing people who worked for the companies Harvard MBAs managed.

Bricklin knew all this, but he also knew that spreadsheets were needed for the exercise; he wanted an easier way to do them. It occurred to him: why not create the spreadsheets on a microcomputer?

At NORC, I experienced first-hand the power of that idea when I started managing budgets with VisiCalc, running on an Osborne portable computer. Soon thereafter I migrated to the first IBM PC at NORC, which ran the invention of another HBS student, Mitch Kapor, who was also frustrated with re-calculation and the other delights of paper spreadsheets or electronic spreadsheets running on large computers.

Lotus 1-2-3 was an efficient tool for managing budgets, one that managers could use themselves without wasting time figuring out what data entry mistake they had made. You had complete control of the numbers and the processing of the data; you didn’t have to wait for a remote computer to do the calculations only to find out you needed to enter the data again. To say nothing, of course, of modeling, what-if scenarios, and the entire range of functions at your fingertips.
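To make the appeal of automatic recalculation concrete, here is a minimal sketch of a toy spreadsheet in Python (purely illustrative, not how VisiCalc or Lotus 1-2-3 were actually implemented) in which changing one assumption immediately updates every formula that depends on it:

```python
# A toy "spreadsheet": cells hold either a number or a formula (a function of the sheet).
# Changing one input and re-reading a total is the "what-if" workflow that VisiCalc and
# Lotus 1-2-3 made instantaneous; the names and numbers here are made up for illustration.

class Sheet:
    def __init__(self):
        self.cells = {}  # cell name -> number, or callable(sheet) returning a number

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        value = self.cells[name]
        return value(self) if callable(value) else value

budget = Sheet()
budget.set("salaries", 120_000)
budget.set("travel", 15_000)
budget.set("overhead", lambda s: 0.25 * s.get("salaries"))                      # formula cell
budget.set("total", lambda s: s.get("salaries") + s.get("travel") + s.get("overhead"))

print(budget.get("total"))   # 165000.0

# The "what-if": change one assumption, and every dependent formula reflects it at once.
budget.set("travel", 25_000)
print(budget.get("total"))   # 175000.0
```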

But in many respects, the IBM PC (and other PCs) was a mainframe on a desk. Steve Jobs and the Lisa and Macintosh teams changed that and brought us an interface that made computing easy, intuitive, and fun. NORC got 80 Macs that year, mostly used for computer-assisted interviewing. I don’t think there was any financial software available for the Mac at the time, and I continued to use Lotus 1-2-3 on the IBM PC. But I played with the Mac at every opportunity I got. Indeed, there was nothing like it at the time.

It took some time before the software running on most PCs adapted to the new personal approach to computing, but eventually Microsoft Windows came along and icons and folders ruled the day. Microsoft also crushed all other electronic spreadsheets with Excel and did the same to other word-processing and presentation tools.

But Steve Jobs triumphed in the end with yet another series of inventions. At the introduction of the iPhone in 2007, he should have said (or let the iPhone say): “Never trust a computer you cannot put in your pocket.” Or “Never trust a computer you cannot touch.” Today, he might have said “Never trust a computer you cannot talk to.” What would he have said ten years from now? “Never trust a computer you cannot merge with”?

Originally published on Forbes.com


Inventing and Fumbling the Future

Thirty-five years ago this week, Apple introduced a computer that changed the way people communicated with their electronic devices, using graphical icons and visual indicators rather than punched cards or text-based commands.

On January 19, 1983, Apple introduced Lisa, a $9,995 PC for business users. Many of its innovations, such as the graphical user interface, a mouse, and document-centric computing, were taken from the Alto computer developed at Xerox PARC, whose ideas reached the market as the $16,595 Xerox Star in 1981.

Steve Jobs recalled (in Walter Isaacson’s Steve Jobs) that he and the Lisa team were very relieved when they saw the Xerox Star: “We knew they hadn’t done it right and that we could–at a fraction of the price.” Isaacson says that “The Apple raid on Xerox PARC is sometimes described as one of the biggest heists in the chronicles of industry” and quotes Jobs on the subject: “Picasso had a saying–‘good artists copy, great artists steal’—and we have always been shameless about stealing great ideas… They [Xerox management] were copier-heads who had no clue about what a computer could do… Xerox could have owned the entire computer industry.”


The story of how Xerox invented the future but failed to realize it has become a popular urban legend in tech circles, especially after the publication in 1988 of Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer by D.K. Smith and R.C. Alexander. Moshe Vardi, the former Editor-in-Chief of the Communications of the ACM (CACM), recounted a few years ago his own story of fumbling the future, as a member of a 1989 IBM research team that produced a report envisioning an “information-technology-enabled 21st-century future.”

The IBM report got right some of the implications of its vision of a “global, multi-media, videotext-like utility.” For example, it predicted a reduced need for travel agents, a flood of worthless information, and how “fast dissemination of information through a global information utility” would increase the volatility of politics, diplomacy, and “other aspects of life.”

Vardi also brought to his readers’ attention a video on the future produced by AT&T in 1993 “with a rather clear vision of the future, predicting what was then revolutionary technology, such as paying tolls without stopping and reading books on computers.”

What can we learn from yesterday’s futures? Vardi correctly concluded that “The future looks clear only in hindsight. It is rather easy to practically stare at it and not see it. It follows that those who did make the future happen deserve double and triple credit. They not only saw the future, but also trusted their vision to follow through, and translated vision to execution.”

But what exactly did those who “fumbled the future” not see? More important, what should we understand now about how their future has evolved?

The IBM report and the AT&T video look prescient today, but they repeated many predictions that had been made years before 1989 and 1993. These predictions eventually became a reality, but it is how we got there that these descriptions of the future missed. To paraphrase Lewis Carroll, if you know where you are going, it matters a lot which road you are taking.

The IBM report says: “In some sense, the proposed vision may not appear to be revolutionary: the envisioned system might be dismissed as a safely predictable extrapolation from and merging of existing information tools that it may complement or even replace.” I would argue that the vision, for both IBM and AT&T, was not just an “extrapolation of existing information tools,” but also an extrapolation of their existing businesses—what they wanted the future to be. Their vision was based on the following assumptions:

  1. The business/enterprise market will be the first to adopt and use the global information utility; the consumer/home market will follow. IBM: “the private home consumer market would probably be the last to join the system because of yet unclear needs for such services and the initial high costs involved.” And: “An important vehicle to spur the development of home applications will be business applications.”
  2. The global information utility will consist of a “global communications network” and “information services” riding on top of it. It will be costly to construct and the performance and availability requirements will be very high. IBM: “Once an information utility is meant to be used and depended on as a ‘multi-media telephone’ system, it must live up to the telephone system RAS [Reliability, Availability, and Serviceability] requirements, which go far beyond most of today’s information systems.” And: “Without 24-hour availability and low MTTR [Mean Time To Repair/Restore] figures, no subscriber will want to rely on such a utility.”
  3. Information will come from centralized databases developed by established information providers (companies) and will be pushed over the network to the users when they request it on a “pay-as-you-go” basis.

When Vardi wrote that it is “rather easy to practically stare at [the future] and not see it,” he probably meant the Internet, which no doubt all of the authors of the IBM report were familiar with (in 1989, a 20-year-old open network connecting academic institutions, government agencies, and some large corporations). But neither IBM nor AT&T (nor other established IT companies) cared much about it because it was not “robust” enough and would not meet the enterprise-level requirements of their existing customers. Moreover, they did not control it, as they controlled their own private networks.

Now, before you say “innovator’s dilemma,” let me remind you (and Professor Christensen) that there were many innovators outside the established IT companies in the 1980s and early 1990s that were pursuing the vision that is articulated so beautifully in the IBM report. The most prominent examples – and for a while, successful – were CompuServe and AOL. A third, Prodigy, was a joint venture of IBM, CBS, and Sears. So, as a matter of fact, even the established players were trying to innovate along these lines and they even followed Christensen’s advice (which he gave about a decade later) that they should do it outside of their “stifling” corporate walls. Another innovator, previously-successful and very-successful-in-the-future, who followed the same vision, was the aforementioned Steve Jobs, launching in 1988 his biggest failure, the NeXT Workstation (the IBM report talks about NeXT-like workstations as the only access device to the global information utility, never mentioning PCs, or laptops, or mobile phones).

The vision of “let’s-use-a-heavy-duty-access-device-to-find-or-get-costly-information-from-centralized-databases-running-on-top-of-an-expensive-network” was thwarted by one man, Tim Berners-Lee, and his 1989 invention, the World Wide Web.

Berners-Lee put the lipstick on the pig, lighting up with information the standardized, open, “non-robust,” and cheap Internet (which was – and still is – piggybacking on the “robust” global telephone network). The network and its specifications were absent from his vision, which was focused on information, on what the IBM and AT&T visions were ultimately all about, i.e., providing people with an easy-to-use tool for creating, sharing, and organizing information. As it turned out, the road to letting people plan their travel on their own was not through an expensive, pay-as-you-go information utility, but through a hypermedia browser and an open network only scientists (and other geeks such as IBM researchers) knew about in 1989.

The amazing thing is that the IBM researchers understood well the importance of hypermedia. The only computer company mentioned by name in the report is Apple and its Hypercard. IBM: “In the context of a global multi-media information utility, the hypermedia concept takes on an enhanced significance in that global hypermedia links may be created to allow users to navigate through and create new views and relations from separate, distributed data bases. A professional composing a hyper-document would imbed in it direct hyperlinks to the works he means to cite, rather than painfully typing in references. ‘Readers’ would then be able to directly select these links and see the real things instead of having to chase them through references. The set of all databases maintained on-line would thus form a hypernet of information on which the user’s workstation would be a powerful window.”

Compare this to Tim Berners-Lee writing in Weaving the Web: “The research community has used links between paper documents for ages: Tables of content, indexes, bibliographies and reference sections… On the Web… scientists could escape from the sequential organization of each paper and bibliography, to pick and choose a path of references that served their own interest.” There is no doubt that the future significance of hypermedia was an insanely great insight by the IBM researchers in 1989, including hinting at Berners-Lee’s great breakthrough which was to escape from (in his words) “the straightjacket of hierarchical documentation systems.”
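To make the idea concrete, here is a minimal sketch in Python (the document names, text, and link structure are invented for illustration) of the difference between a citation a reader has to chase and an embedded link a reader, or a program, can simply follow:

```python
# A toy "hypernet": each document embeds direct links to other documents, so a reader
# can pick and choose a path of references instead of chasing citations by hand.
documents = {
    "paper-a": {"text": "Results build on [paper-b] and [paper-c].",
                "links": ["paper-b", "paper-c"]},
    "paper-b": {"text": "A method first sketched in [paper-c].",
                "links": ["paper-c"]},
    "paper-c": {"text": "The original idea.",
                "links": []},
}

def follow(start, path):
    """Follow a chosen path of embedded links, returning the documents visited."""
    visited, current = [start], start
    for link in path:
        if link not in documents[current]["links"]:
            raise ValueError(f"{current!r} has no link to {link!r}")
        visited.append(link)
        current = link
    return visited

# One reader's path of interest: a -> b -> c
print(follow("paper-a", ["paper-b", "paper-c"]))   # ['paper-a', 'paper-b', 'paper-c']
```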

But it was Berners-Lee, not IBM, who successfully translated his vision into a viable product (or, more accurately, three standards that spawned millions of successful products). Why? Because he looked at the future through a different lens than IBM’s (or AOL’s).

Berners-Lee’s vision did not focus on the question of how you deliver information – the network – but on the question of how you organize and share it. This, as it turned out, was the right path to realizing the visions of e-books, a flood of worthless information, and the elimination of all kinds of intermediaries. And because this road was taken by Berners-Lee and others, starting with Mosaic (the first successful browser), information became free and its creation shifted in a big way from large, established media companies to individuals and small “new media” ventures. Because this road was taken, IT innovation in the last thirty years has been mainly in the consumer space, and the market for information services has been almost entirely consumer-oriented.

I’m intimately familiar with IBM-type visions of the late 1980s because I was developing similar ones for my employer at the time, Digital Equipment Corporation, most famously (inside DEC) my 1989 report, “Computing in the 1990s.” I predicted that the 1990s would give rise to “a distributed network of data centers, servers and desktop devices, able to provide adequate solutions (i.e., mix and match various configurations of systems and staff) to business problems and needs.” Believe me, this was quite visionary for people used to talking only about “systems.” (My report was incorporated in the annual business plan for the VAX line of mini-computers, the plan referring to them as “servers” for the first time.)

In another report, on “Enterprise Integration,” I wrote: “Successful integration of the business environment, coupled with a successful integration of the computing environment, may lead to data overload. With the destruction of both human and systems barriers to access, users may find themselves facing an overwhelming amount of data, without any means of sorting it and capturing only what they need at a particular point in time. It is the means of sorting through the data that carry the potential for true Enterprise Integration in the 1990s.” Not too bad, if I may say so myself, predicting Google when Larry Page and Sergey Brin were still in high school.

And I was truly prescient in a series of presentations and reports in the early 1990s, arguing that the coming digitization of all information (most of it was in analog form at the time) was going to blur what were then rigid boundaries between the computer, consumer electronics, and media “industries.”

But I never mentioned the Internet in any of these reports. Why pay attention to an obscure network which I used a few times to respond to questions about my reports by some geeks at places with names like “Argonne National Laboratory,” when Digital had at the time the largest private network in the world, Easynet, and more than 10,000 communities of VAX Notes (electronic bulletin boards with which DEC employees – and authorized partners and customers and friendly geeks – collaborated and shared information)?

Of course, the only future possible was that of a large, expensive, global, multi-media, high-speed, robust network. Just like Easynet. Or IBM’s and AT&T’s private networks, or the visions from other large companies of how the future of computing and telecommunications would be a nice and comforting extrapolation of their existing businesses.

The exception to these visions of computing of the late 1980s and early 1990s was the one produced by Apple in 1987, The Knowledge Navigator. It was also an extrapolation of the company’s existing business, and because of that, it portrayed a different future.

In contrast to IBM’s and AT&T’s (and DEC’s), it was focused on information and individuals, not on communication utilities and commercial enterprises. It featured a university professor, conducting his research work, investigating data and collaborating with a remote colleague, assisted by a talking, all-knowing “smart agent.” The global network was there in the background, but the emphasis was on navigating knowledge and a whole new way of interacting with computers by simply talking to them, as if they were humans.

We are not there yet, but Steve Jobs and Apple moved us closer in 2007 by introducing a new way (touch) for interacting with computers, packaged as phones, which also turned out to be the perfect access devices—much better than NeXT Workstations—to the global knowledge network, the Web.

Back in 1983, the Lisa failed to become a commercial success, the second such failure in a row for Apple. The innovative, visionary company almost fumbled the future. But then it found the right packaging, the right market, and the right pricing for its breakthrough human-computer interface: The Macintosh.

Originally published on Forbes.com


From Tabulating Machines to Machine Learning to Deep Learning


A Census Bureau clerk tabulates data using a Hollerith Machine (Source: US Census Bureau)

This week’s milestone in the history of technology is the patent that launched the ongoing quest to get machines to help us and them know more about our world, from tabulating machines to machine learning to deep learning (or today’s “artificial intelligence”).

On January 8, 1889, Herman Hollerith was granted a patent titled the “Art of Compiling Statistics.” The patent described a punched card tabulating machine which launched a new industry and the fruitful marriage of statistics and computer engineering—called “machine learning” since the late 1950s, and reincarnated today as “deep learning” (also popularly known today as “artificial intelligence”).

Commemorating IBM’s 100th anniversary in 2011, The Economist wrote:

In 1886, Herman Hollerith, a statistician, started a business to rent out the tabulating machines he had originally invented for America’s census. Taking a page from train conductors, who then punched holes in tickets to denote passengers’ observable traits (e.g., that they were tall, or female) to prevent fraud, he developed a punch card that held a person’s data and an electric contraption to read it. The technology became the core of IBM’s business when it was incorporated as Computing Tabulating Recording Company (CTR) in 1911 after Hollerith’s firm merged with three others.

In his patent application, Hollerith explained the use of his machine in the context of a population survey, highlighting its usefulness in the statistical analysis of “big data”:

The returns of a census contain the names of individuals and various data relating to such persons, as age, sex, race, nativity, nativity of father, nativity of mother, occupation, civil condition, etc. These facts or data I will for convenience call statistical items, from which items the various statistical tables are compiled. In such compilation the person is the unit, and the statistics are compiled according to single items or combinations of items… it may be required to know the numbers of persons engaged in certain occupations, classified according to sex, groups of ages, and certain nativities. In such cases persons are counted according to combinations of items. A method for compiling such statistics must be capable of counting or adding units according to single statistical items or combinations of such items. The labor and expense of such tallies, especially when counting combinations of items made by the usual methods, are very great.
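In modern terms, the patent describes cross-tabulation: counting units by single items or by combinations of items. Here is a minimal sketch in Python (the records and field names are made up, purely to illustrate the counting task Hollerith describes):

```python
from collections import Counter

# Each record stands for one person with a few "statistical items," as in Hollerith's example.
people = [
    {"sex": "F", "age_group": "20-29", "occupation": "clerk"},
    {"sex": "M", "age_group": "30-39", "occupation": "farmer"},
    {"sex": "F", "age_group": "20-29", "occupation": "farmer"},
    {"sex": "F", "age_group": "30-39", "occupation": "clerk"},
]

# Counting by a single item...
by_sex = Counter(p["sex"] for p in people)

# ...and by a combination of items, which is what made hand tallying so laborious.
by_sex_and_occupation = Counter((p["sex"], p["occupation"]) for p in people)

print(by_sex)                  # Counter({'F': 3, 'M': 1})
print(by_sex_and_occupation)   # counts: ('F', 'clerk'): 2, ('M', 'farmer'): 1, ('F', 'farmer'): 1
```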

James Cortada in Before the Computer quotes Walter Wilcox of the U.S. Bureau of the Census:

While the returns of the Tenth (1880) Census were being tabulated at Washington, John Shaw Billings [Director of the Division of Vital Statistics] was walking with a companion through the office in which hundreds of clerks were engaged in laboriously transferring data from schedules to record sheets by the slow and heartbreaking method of hand tallying. As they were watching the clerks he said to his companion, “there ought to be some mechanical way of doing this job, something on the principle of the Jacquard loom.”

Says Cortada: “It was a singular moment in the history of data processing, one historians could reasonably point to and say that things had changed because of it. It stirred Hollerith’s imagination and ultimately his achievements.” Cortada describes the results of the first large-scale machine learning project:

The U.S. Census of 1890… was a milestone in the history of modern data processing…. No other occurrence so clearly symbolized the start of the age of mechanized data handling…. Before the end of that year, [Hollerith’s] machines had tabulated all 62,622,250 souls in the United States. Use of his machines saved the bureau $5 million over manual methods while cutting sharply the time to do the job. Additional analysis of other variables with his machines meant that the Census of 1890 could be completed within two years, as opposed to nearly ten years taken for fewer data variables and a smaller population in the previous census.

But the efficient output of the machine was regarded by some as “fake news.” In 1891, the Electrical Engineer reported (quoted in Patricia Cline Cohen’s A Calculating People):

The statement by Mr. Porter [the head of the Census Bureau, announcing the initial count of the 1890 census] that the population of this great republic was only 62,622,250 sent into spasms of indignation a great many people who had made up their minds that the dignity of the republic could only be supported on a total of 75,000,000. Hence there was a howl, not of “deep-mouthed welcome,” but of frantic disappointment.  And then the publication of the figures for New York! Rachel weeping for her lost children and refusing to be comforted was a mere puppet-show compared with some of our New York politicians over the strayed and stolen Manhattan Island citizens.

A century later, no matter how efficiently machines learned, they were still accused of creating and disseminating “fake news.” On March 24, 2011, the U.S. Census Bureau delivered “New York’s 2010 Census population totals, including first look at race and Hispanic origin data for legislative redistricting.” In response to the census data showing that New York had about 200,000 fewer people than originally thought, Sen. Chuck Schumer said, “The Census Bureau has never known how to count urban populations and needs to go back to the drawing board. It strains credulity to believe that New York City has grown by only 167,000 people over the last decade.” Mayor Bloomberg called the numbers “totally incongruous” and Brooklyn borough president Marty Markowitz said, “I know they made a big big mistake.” [The results of the 1990 census were also disappointing and were unsuccessfully challenged in court, according to the New York Times.]

Complaints by politicians and other people not happy with learning machines have not slowed down the continuing advances in using computers in ingenious ways for increasingly sophisticated statistical analysis. But for many years after Hollerith’s invention and after tabulating machines became digital computers, the machines interacted with the world around them in a very specific, one-dimensional way. Kevin Maney in Making the World Work Better:

Hollerith gave computers a way to sense the world through a crude form of touch. Subsequent computing and tabulating machines would improve on the process, packing more information unto cards and developing methods for reading the cards much faster. Yet, amazingly, for six more decades computers would experience the outside world no other way.

Deep learning, the recently successful variant of machine learning (giving rise to the buzz around “artificial intelligence”), opened up new vistas for learning machines. Now they can “see” and “hear” the world around them, driving a worldwide race to produce the winning self-driving car and to plant virtual assistants everywhere—new applications in the age-old endeavor of combining statistical analysis with computer engineering, of getting machines to assist us in the processing, tabulating, and analysis of data.

Originally published on Forbes.com


Smart Machines Rising

A number of this week’s milestones in the history of technology highlight the role of mechanical devices in automating work, augmenting human activities, and allowing many people to participate in previously highly-specialized endeavors—a process technology vendors like to call “democratization” (as in “democratization of artificial intelligence”).

On January 3, 1983, Time magazine put the PC on its cover as “machine of the year.” Time publisher John A. Meyers wrote: “Several human candidates might have represented 1982, but none symbolized the past year more richly, or will be viewed by history as more significant, than a machine: the computer.” In the cover article, Roger Rosenblatt wrote:

Inventions arise when they’re needed. This here screen and keyboard might have come along any old decade, but it happened to pop up when it did, right now, at this point in time, like the politicians call it, because we were getting hungry to be ourselves again. That’s what I think, buddy. “The most idealist nations invent most machines.” D.H. Lawrence said that. Great American, D.H.

O pioneer. Folks over in Europe have spent an awful lot of time, more than 200 years if you’re counting, getting up on their high Lipizzaners and calling us a nation of gears and wheels. But we know better. What do you say? Are you ready to join your fellow countrymen (4 million Americans can’t be wrong) and take home some bytes of free time, time to sit back after all the word processing and inventorying and dream the dear old dream? Stand with me here. The sun rises in the West. Play it, Mr. Dvorak. There’s a New World coming again, looming on the desktop. Oh, say, can you see it? Major credit cards accepted.

The Time magazine cover appeared just a few years after the appearance of the first PCs and the software written for them, devices that automated previous work and helped put it in the hands of non-specialists. For example, turning some accounting work—specifically, management accounting, the ongoing internal measurement of a company’s performance—into work that could be performed by employees without an accounting degree. This “democratization” of accounting not only automated previous work but greatly augmented management work, giving business executives new data about their companies and new ways to measure and predict future performance.

On January 2, 1979, Software Arts was incorporated by co-founders Dan Bricklin and Bob Frankston for the purpose of developing VisiCalc, the world’s first spreadsheet program, which would be published by a separate company, Personal Software Inc. (later named VisiCorp). VisiCalc would come to be widely regarded as the first “killer app” that turned the PC into a serious business tool. Dan Bricklin:

The idea for the electronic spreadsheet came to me while I was a student at the Harvard Business School, working on my MBA degree, in the spring of 1978. Sitting in Aldrich Hall, room 108, I would daydream. “Imagine if my calculator had a ball in its back, like a mouse…” (I had seen a mouse previously, I think in a demonstration at a conference by Doug Engelbart, and maybe the Alto). And “…imagine if I had a heads-up display, like in a fighter plane, where I could see the virtual image hanging in the air in front of me. I could just move my mouse/keyboard calculator around on the table, punch in a few numbers, circle them to get a sum, do some calculations, and answer ‘10% will be fine!'” (10% was always the answer in those days when we couldn’t do very complicated calculations…)

The summer of 1978, between first and second year of the MBA program, while riding a bike along a path on Martha’s Vineyard, I decided that I wanted to pursue this idea and create a real product to sell after I graduated.

Long before the invention of the PC and its work-enhancing programs, an ingenious mechanical device was applied to the work of painters and illustrators, people with unique and rare skills for capturing and preserving reality.

On January 7, 1839, the Daguerreotype photography process was presented to the French Academy of Sciences by Francois Arago, a physicist and politician. Arago told the Academy that it was “…indispensable that the government should compensate M. Daguerre, and that France should then nobly give to the whole world this discovery which could contribute so much to the progress of art and science.”

On March 5, 1839, another inventor, looking (in the United States, England, and France) for government sponsorship of his invention of the telegraph, met with Daguerre. A highly impressed Samuel F. B. Morse wrote to his brother: “It is one of the most beautiful discoveries of the age… No painting or engraving ever approached it.”

In late September 1839, as Jeff Rosenheim tells us in Art and the Empire City, shortly after the French government (on August 19) publicly released the details of the Daguerreotype process, “…a boat arrived [in New York] with a published text with step-by-step instructions for creating the plates and making the exposures. Morse and others in New York, Boston, and Philadelphia immediately set about to build their cameras, find usable lenses, and experiment with the new invention.”

New Yorkers were ready for the Daguerreotype, already alerted to the “new discovery” by articles in the local press, such as the one in The Corsair on April 13, 1839, titled “The Pencil of Nature”: “Wonderful wonder of wonders!! … Steel engravers, copper engravers, and etchers, drink up your aquafortis, and die! There is an end to your black art… All nature shall paint herself — fields, rivers, trees, houses, plains, mountains, cities, shall all paint themselves at a bidding, and at a few moment’s notice.”

Another memorable phrase capturing the wonders of photography came from the pen (or pencil) of Oliver Wendell Holmes, who wrote in “The Stereoscope and the Stereograph” (The Atlantic Monthly, June 1859):

The Daguerreotype… has fixed the most fleeting of our illusions, that which the apostle and the philosopher and the poet have alike used as the type of instability and unreality. The photograph has completed the triumph, by making a sheet of paper reflect images like a mirror and hold them as a picture… [it is the] invention of the mirror with a memory…

The time will come when a man who wishes to see any object, natural or artificial, will go to the Imperial, National, or City Stereographic Library and call for its skin or form, as he would for a book at any common library… we must have special stereographic collections, just as we have professional and other special libraries. And as a means of facilitating the formation of public and private stereographic collections, there must be arranged a comprehensive system of exchanges, so that there may grow up something like a universal currency of these bank-notes, or promises to pay in solid substance, which the sun has engraved for the great Bank of Nature.

Let our readers fill out a blank check on the future as they like,—we give our indorsement to their imaginations beforehand. We are looking into stereoscopes as pretty toys, and wondering over the photograph as a charming novelty; but before another generation has passed away, it will be recognized that a new epoch in the history of human progress dates from the time when He who

never but in uncreated light
Dwelt from eternity—

took a pencil of fire from the hand of the “angel standing in the sun,” and placed it in the hands of a mortal.

In Civilization (March/April 1996), William Howarth painted for us the larger picture of the new industry in America: “Daguerreotypes introduced to Americans a new realism, a style built on close observation and exact detail, so factual it no longer seemed an illusion. … Hawthorne’s one attempt at literary realism, The House of the Seven Gables (1851), features a daguerreotypist who uses his new art to dispel old shadows: ‘I make pictures out of sunshine,’ he claims, and they reveal ‘the secret character with a truth that no painter would ever venture upon.’… By 1853 the American photo industry employed 17,000 workers, who took over 3 million pictures a year.”

A hundred and fifty years after what Holmes called the moment of the “triumph of human ingenuity,” the Metropolitan Museum of Art mounted an exhibition on the early days of Daguerreotypes in France. Said Philippe de Montebello, the director of the museum at the time: “The invention of the daguerreotype—the earliest photographic process—forever altered the way we see and understand our world. No invention since Gutenberg’s movable type had so changed the transmission of knowledge and culture, and none would have so great an impact again until the informational revolution of the late twentieth century.”

In the same year of the Metropolitan’s exhibition, 2003, more digital cameras than traditional film cameras were sold for the first time in the U.S. The “informational revolution” has replaced analog with digital, but it did not alter the idea of photography as invented by Nicéphore Niépce in 1822, and captured so well by the inimitable Ambrose Bierce in his definition of “photograph” (The Devil’s Dictionary, 1911): “A picture painted by the sun without instruction in art.”

And what’s today’s Machine of the Year? 2017 was the year the robots really, truly arrived, says Matt Simon in Wired:

They escaped the factory floor and started conquering big cities to deliver Mediterranean food. Self-driving cars swarmed the streets. And even bipedal robots—finally capable of not immediately falling on their faces—strolled out of the lab and into the real world. The machines are here, and it’s an exhilarating time indeed. Like, now Atlas the humanoid robot can do backflips. Backflips.

Originally published on Forbes.com


The Analog-to-Digital Journey of Film

Two of this week’s milestones in the history of technology reflect the analog-to-digital transformation of film making and distribution.

On December 28, 1895, the first public screening of films at which admission was charged was held by the Lumière brothers at the Salon Indien du Grand Café in Paris. It featured ten short films, including their first film, Sortie des Usines Lumière à Lyon (Workers Leaving the Lumière Factory). Each film was 17 meters long, which, when hand cranked through a projector, ran approximately 50 seconds.

British photographer Eadweard Muybridge invented the first movie projector, the Zoopraxiscope, in 1879. On October 17, 1888, Thomas Edison filed a patent caveat for his own motion picture device, the “Optical Phonograph,” which projected images just 1/32-inch across. In it, he wrote: “I am experimenting upon an instrument which does for the Eye what the phonograph does for the Ear.”

On May 9, 1893, Edison presented the Kinetoscope, the first film-viewing device, at the Brooklyn Institute of Arts and Sciences. The first film publicly shown on the system was Blacksmith Scene, the earliest known example of actors performing a role in a film.

By 1900, the Lumière brothers had produced 1,299 short movies. For the World Fair that year, they developed their new Lumière Wide format which, at 75mm wide, has held the record for over 100 years as the widest film format.

35mm film was the standard for making movies and distributing them for about a century. On June 18, 1999, Texas Instruments’ DLP Cinema projector technology was publicly demonstrated on two screens in Los Angeles and New York for the release of Lucasfilm’s Star Wars Episode I: The Phantom Menace. By December 2000, there were 15 digital cinema screens in the United States and Canada, 11 in Western Europe, 4 in Asia, and 1 in South America. By the end of 2016, almost all of the world’s cinema screens were digital.

The digital transformation of movie theaters also attracted new types of content. On December 30, 2006, a live HD broadcast from the Metropolitan Opera was transmitted for the first time to 100 movie theaters across North America plus others in Britain, Japan and one in Norway. Today, Live in HD transmissions are seen on more than 2,000 screens in 70 countries around the world.

The live broadcasts from the Metropolitan have been followed by similar ones from the Royal Opera House, Sydney Opera House, English National Opera and other opera, ballet and theater companies such as NT Live, Branagh Live, Royal Shakespeare Company, Shakespeare’s Globe, the Royal Ballet, Mariinsky Ballet, the Bolshoi Ballet and the Berlin Philharmoniker.

The range of offerings has expanded to include a variety of music concerts, live sport events, documentaries and lectures, faith broadcasts, stand-up comedy, museum and gallery exhibitions, and TV specials such as the record-breaking Doctor Who fiftieth anniversary special The Day of The Doctor.

Opera, theater, ballet, sport, exhibitions, TV specials and documentaries are now established forms of what came to be known as “Event Cinema.” It is estimated that worldwide revenues of the Event Cinema industry will reach $1 billion by 2019.


Originally published on Forbes.com


The Knowledge Navigator

Two of this week’s milestones in the history of technology—the development of the first transistor at Bell Labs and the development of the Alto PC at Xerox PARC—connect to this year’s 30th anniversary of the Knowledge Navigator, a concept video about the future of computing used as a motivational tool by John Sculley, Apple’s CEO at the time.

“I wanted to inspire people, convince them that we were not at the end of creativity in computing, only at the very beginning of the journey,” Sculley told me earlier this year. Given Moore’s Law, Sculley said, they were confident in 1987 that in the future they would be able to do multimedia, build communications into computers, perform simulations, and develop computers that would act as an intelligent assistant. The question was “how to present it to people so they will believe it will happen,” and the answer was a “concept video.”

The fundamental innovation that enabled what came to be known as “Moore’s Law” was demonstrated seventy years ago this week, on December 23, 1947. The plaque commemorating the invention of the first transistor at Bell Telephone Laboratories reads: “At this site, in Building 1, Room 1E455, from 17 November to 23 December 1947, Walter H. Brattain and John A. Bardeen—under the direction of William B. Shockley—discovered the transistor effect, and developed and demonstrated a point-contact germanium transistor. This led directly to developments in solid-state devices that revolutionized the electronics industry and changed the way people around the world lived, learned, worked, and played.”

In the years to follow, numerous inventions have been based on the ingenuity of engineers who figured out how to cram more and more transistors into an integrated circuit, a process that has made possible the continuing miniaturization of computing devices while at the same time providing them with more power to perform more tasks.

One such invention taking advantage of the ongoing process of better-faster-smaller was the Alto personal computer. On December 19, 1974, Butler Lampson at Xerox PARC sent a memo to his management asking for funding for the development of a number of Alto personal computers. He wrote: “If our theories about the utility of cheap, powerful personal computers are correct, we should be able to demonstrate them convincingly on Alto. If they are wrong, we can find out why.”

The Alto, inspired by Doug Engelbart’s “Mother of All Demos”—the mother of all concept videos and demonstrations about the future of computing—in turn inspired Steve Jobs when he first saw it at Xerox PARC in December 1979. Jobs was particularly taken with the Alto’s graphical user interface (GUI), which was later used in Apple’s Lisa and Macintosh personal computers. “It was like a veil being lifted from my eyes,” Jobs told Walter Isaacson. “I could see what the future of computing was destined to be.”

In 1986, Jobs was no longer with Apple and was busy exploring the future of computing with the NeXT Computer. Apple was “on the upswing,” according to Sculley. But Apple Fellow Alan Kay told him, “next time we won’t have Xerox.” Kay probably meant that Apple was now missing Jobs’ talent for recognizing emerging technologies and their potential to become successful products, and for creating a compelling, motivating vision based on those insights. “I believed it was important to show people that Apple can still be creative after Steve left,” says Sculley.

A year of surveying emerging technologies and ideas by visiting universities and research laboratories and engaging in discussions with Apple engineers culminated in the Knowledge Navigator video, in which “we tried to map out what might seem obvious to everybody in twenty years,” says Sculley. He describes it as a vision of a world of interactive multimedia communications in which computation would become just a commodity enabler and knowledge applications would be accessed by smart agents working over networks connected to massive amounts of digitized information.

The Knowledge Navigator helped attract and retain talent, according to Sculley, and it inspired a number of projects and products—QuickTime, desktop multimedia software for the Macintosh; HyperCard, the first truly interactive scripting software that could be used without programming knowledge; and the Newton, the first hand-held computing device, or Personal Digital Assistant (PDA), as the product category was called at the time. The Newton itself was not successful, but it housed the ARM processor (the type that powers today’s smartphones), which Apple co-developed. The company later sold its ARM stake for $800 million.

Thirty years later, talking “knowledge navigators” or intelligent assistants—Amazon’s Alexa, Google Now, Apple’s Siri, Microsoft’s Cortana—are finally a reality, but they are still far from displaying the versatility and understanding (i.e., intelligence) of the visionary Knowledge Navigator.

But Sculley is still the enthusiastic marketer he has been all his working life (“in marketing, perception always leads reality”), predicting that by 2025 “this kind of technology will be useful and indispensable, and will have high impact on how work is done,” and that it will help drive the reinvention of healthcare and education.

Following his convictions about the future, Sculley has invested in and been involved with startups that apply big data analytics and artificial intelligence to reinventing work and healthcare, among them Zeta Global, RxAdvance, and People Ticker. “We no longer live in linear time,” says Sculley.

It’s also possible that we have never lived in linear time, something which makes predictions, especially about the future, difficult. As Peter Denning wrote in the Communications of the ACM (September 2012): “Unpredictability arises not from insufficient information about a system’s operating laws, from inadequate processing power, or from limited storage. It arises because the system’s outcomes depend on unpredictable events and human declarations [society’s support for the adoption of specific technologies]. Do not be fooled into thinking that wise experts or powerful machines can overcome such odds.” Or as Bob Metcalfe wrote in Internet Collapses: “it’s relatively easy to predict the future. It’s harder to make precise predictions. And it’s hardest to get the timing right.”

Still, no matter how many predictions go wrong, or maybe because so many go wrong, we continue to seek—and create—guideposts, potential alternatives, and inspiring visions.

In Beyond Calculation: The Next Fifty Years of Computing, a collection of essays that Denning and Metcalfe edited in 1997, they wrote that they hoped the book would be not about predictions, but about developing “possibilities, raise issues, and enumerate some of the choices we will face about how information technology will affect us in the future.”

Originally published on Forbes.com

Posted in Apple, Computer history

Computer Networking: The Technology Trend Driving the Success of Apple, Microsoft and Google

ethernet_MetcalfeBoggs

Bob Metcalfe (left) and Dave Boggs, inventors of Ethernet

This week’s milestones in the history of technology reveal the most important—and quite neglected—technology trend contributing to the success of Apple, Google, and Microsoft, ultimately making them today’s three most valuable US companies.

Apple and Microsoft are considered the premier—and long-lasting—examples of what has come to be known as the “PC era,” the former focusing on a self-contained (and beautifully designed) system of hardware and software, the latter on software that became the de facto standard for PCs.

On December 17, 1974, the January 1975 issue of Popular Electronics hit the newsstands. Its cover featured the Altair 8800, the “World’s First Minicomputer Kit to Rival Commercial Models.” In fact, it was the first commercially successful microcomputer (i.e., PC) kit and the start of what came to be known as the “personal computing revolution.”

Les Solomon, a Popular Electronics editor, agreed to feature the kit on the cover of the always-popular January issue when Ed Roberts, co-founder of Micro Instrumentation and Telemetry Systems (MITS), suggested to him that he would build a professional-looking, complete kit based on Intel’s 8080 chip. Stan Veit: “Roberts was gambling that with the computer on the cover of Popular Electronics, enough of the 450,000 readers would pay $397 to build a computer, even if they didn’t have the slightest idea of how to use it.”

MITS needed to sell 200 kits to break even, but within a few months it sold about 2,000 through Popular Electronics alone. Veit: “That is more computers of one type than had ever been sold before in the history of the industry.”

Visiting his high-school friend Bill Gates, then a student at Harvard University, Paul Allen, then a programmer with Honeywell, saw the January 1975 issue of Popular Electronics at the Out of Town News newsstand at Harvard Square. Grasping the opportunity opened up by personal computers, and eager not to let others get to it first, the two developed a version of the BASIC programming language for the Altair in just a few weeks. In April 1975, they moved to MITS’ headquarters in Albuquerque, New Mexico, signing their contract with MITS “Paul Allen and Bill Gates doing business as Micro-Soft.”

PC kits like the Altair 8800 also gave rise to local gatherings of electronics hobbyists such as Silicon Valley’s Homebrew Computer Club, which first met on March 5, 1975. Steve Wozniak presented his prototype for a fully assembled PC to members of the club; in July 1976 it went on sale as the Apple I. On December 12, 1980, Apple Computer went public in the largest IPO since Ford Motor Company went public in 1956. Originally priced to sell at $14 a share, the stock opened at $22 and all 4.6 million shares were sold almost immediately. The stock rose almost 32% that day to close at $29, giving the company a valuation of $1.78 billion. In August 2012, Apple became the most valuable company in history in terms of market capitalization, at $620 billion.

The PCs on which both Apple and Microsoft built their early fortunes were not “revolutionary”: they were essentially a mainframe-on-a-desk, resembling in conception and architecture the much larger machines that had dominated the computer industry for the previous three decades. They did, however, give rise to new desktop applications (e.g., spreadsheets) which contributed greatly to increased personal productivity. But within the confines of a stand-alone PC, their impact was limited.

The real potential of the PC to improve work processes and make a profound impact on productivity was realized only in the mid-1990s with the commercial success of Local Area Networks (LANs), the linking of PCs in one building-wide or campus-wide network. That breakthrough invention saw the light of day in the early 1970s at Xerox PARC. Later, on December 13, 1977, Bob Metcalfe, David Boggs, Charles Thacker, and Butler Lampson received a patent for the Ethernet, titled “Multipoint Data Communication System with Collision Detection.”
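
The “collision detection” in the patent’s title refers to how stations share a single cable: a station transmits, listens for a collision with another transmission, and, if one occurs, waits a random and exponentially growing interval before retrying. The sketch below is only an illustration of that backoff rule, not code from the patent; the slot time shown is the classic 10 Mbps Ethernet value, and the function name is invented for illustration.

```python
import random

def backoff_delay(collision_count, slot_time=51.2e-6, max_exponent=10):
    """Truncated binary exponential backoff: after the n-th consecutive
    collision, wait a random number of slot times in [0, 2**k - 1],
    where k = min(n, max_exponent)."""
    k = min(collision_count, max_exponent)
    return random.randint(0, 2**k - 1) * slot_time

# After a third consecutive collision, a station waits 0-7 slot times.
print(f"retry delay: {backoff_delay(3) * 1e6:.1f} microseconds")
```

The randomness is what lets many stations share one wire without central coordination: colliding stations are unlikely to pick the same delay twice in a row.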

Today, Ethernet is the dominant networking technology, linking computers not just locally but also over long distances, in a Wide-Area Network (WAN). The best-known WAN today is the Internet, a network of networks linking billions of devices all around the world. Limited mostly to academic research in its first twenty-five years, the Internet realized its real potential only with the 1989 invention of the World-Wide Web, software running over the Internet that facilitated a higher level of linking between numerous content elements (documents, photos, videos).

On December 14, 1994, the Advisory Committee of the World-Wide Web Consortium (W3C) met for the first time at MIT. The Web’s inventor, Tim Berners-Lee, in Weaving the Web: “The meeting was very friendly and quite small with only about twenty-five people. Competitors in the marketplace, the representatives came together with concerns over the potential fragmentation of HTML…if there was any centralized point of control, it would rapidly become a bottleneck that would restrict the Web’s growth and the Web would never scale up. Its being ‘out of control’ was very important.”

The Web’s “out of control” nature allowed for the emergence of new companies seeking to benefit from its success in driving the further proliferation of computing. The Web moved the computer from being mostly an enterprise productivity-improvement tool to being the foundation for a myriad of innovations affecting consumers and their daily lives. Google (now Alphabet) was one of these innovations, becoming the best guide to the ever-increasing volumes of linked information on the Web.

Today, Apple is the most valuable US company at $870 billion, followed by Alphabet at $724 billion and Microsoft at $649 billion.

Originally published on Forbes.com

Posted in Computer history, Computer Networks

The Origins of the Open Internet and Net Neutrality

arpanet_map71

ARPAnet in 1971 (Source: Larry Roberts’ website)

Two of this week’s milestones in the history of technology highlight the foundations laid 50 years ago that are at the core of today’s debates over net neutrality and the open Internet.

On December 6, 1967, the Advanced Research Projects Agency (ARPA) at the United States Department of Defense issued a four-month contract to Stanford Research Institute (SRI) for the purpose of studying the “design and specification of a computer network.” SRI was expected to report on the effects of selected network tasks on Interface Message Processors (today’s routers) and “the communication facilities serving highly responsive networks.”

The practical motivation for the establishment of what later became known as the Internet was the need to open up and connect isolated and proprietary communication networks. When Robert Taylor became director of the Information Processing Techniques Office (IPTO) at ARPA in February 1966, he found that each scientific research project his agency was sponsoring required its own specialized terminal and unique set of user commands. Most important, while computer networks benefited the scientists collaborating on each project, creating project-specific communities, there was no way to extend the collaboration across scientific communities. Taylor proposed to his boss the ARPAnet, a network that would connect the different projects ARPA was sponsoring.

What Taylor and his team envisioned was an open and decentralized network, as opposed to a closed network managed from one central location. In early 1967, at a meeting of ARPA’s principal investigators in Ann Arbor, Michigan, Larry Roberts, the ARPA network program manager, proposed his idea for a distributed ARPAnet as opposed to a centralized network managed by a single computer.

Roberts’ proposal that all host computers would connect to one another directly, doing double duty as both research computers and networking routers, was not endorsed by the principal investigators, who were reluctant to dedicate valuable computing resources to network administration. After the meeting broke up, Wesley Clark, a computer scientist at Washington University in St. Louis, suggested to Roberts that the network be managed by identical small computers, each attached to a host computer. Accepting the idea, Roberts named the small computers dedicated to network administration “Interface Message Processors” (IMPs); they later evolved into today’s routers.
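
To make that division of labor concrete, here is a minimal toy model of the arrangement, assuming nothing about the real IMP software or protocols: hosts hand messages to their attached IMPs, and the IMPs relay them hop by hop along a given route. All names and the two-node topology are illustrative only (UCLA and SRI were the first two ARPAnet sites).

```python
# Toy model of the IMP idea: hosts never talk to each other directly;
# each host hands a message to its attached IMP, and IMPs relay it
# hop by hop until the destination host's IMP delivers it locally.

class IMP:
    def __init__(self, name):
        self.name = name
        self.links = {}   # neighbor IMP name -> IMP
        self.hosts = {}   # attached host name -> Host

    def connect(self, other):
        self.links[other.name] = other
        other.links[self.name] = self

    def attach(self, host):
        self.hosts[host.name] = host
        host.imp = self

    def forward(self, dest, payload, route):
        if dest in self.hosts:        # destination host is attached here
            self.hosts[dest].receive(payload)
        else:                         # otherwise pass to the next IMP
            next_hop = route.pop(0)
            self.links[next_hop].forward(dest, payload, route)

class Host:
    def __init__(self, name):
        self.name = name
        self.imp = None   # set when attached to an IMP

    def send(self, dest, payload, route):
        self.imp.forward(dest, payload, route)

    def receive(self, payload):
        print(f"{self.name} received: {payload!r}")

ucla_imp, sri_imp = IMP("ucla-imp"), IMP("sri-imp")
ucla_imp.connect(sri_imp)
ucla_host, sri_host = Host("ucla-host"), Host("sri-host")
ucla_imp.attach(ucla_host)
sri_imp.attach(sri_host)
ucla_host.send("sri-host", "LO", route=["sri-imp"])
```

The point of the sketch is the separation of concerns Clark proposed: the hosts only know their own IMP, while the IMPs handle all of the network plumbing.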

In October 1967, at the first ACM Symposium on Operating Systems Principles, Roberts presented “Multiple computer networks and intercomputer communication,” in which he described the architecture of the “ARPA net” and argued that giving scientists the ability to explore data and programs residing in remote locations would reduce duplication of effort and result in enormous savings: “A network will foster the ‘community’ use of computers. Cooperative programming will be stimulated, and in particular fields or disciplines it will be possible to achieve ‘critical mass’ of talent by allowing geographically separated people to work effectively in interaction with a system.”

In August 1968, ARPA sent out an RFQ to 140 companies, and in December 1968 awarded the contract for building the first four IMPs to Bolt, Beranek and Newman (BBN). These would become the first nodes of the network we know today as the Internet.

The same month the contract was awarded, on December 9, 1968, SRI’s Doug Engelbart demonstrated the oNLine System (NLS) to about one thousand attendees at the Fall Joint Computer Conference held by the American Federation of Information Processing Societies (AFIPS). With this demonstration, Engelbart took the decentralized and open vision of the global network a step further, showing what could be done with its interactive, real-time communications.

The demonstration introduced the first computer mouse, hypertext linking, multiple windows with flexible view control, real-time on-screen text editing, and shared-screen teleconferencing. Engelbart and his colleague Bill English, the engineer who designed the first mouse, conducted a real-time demonstration in San Francisco with co-workers connected from Engelbart’s Augmentation Research Center (ARC) at SRI’s headquarters in Menlo Park, CA. The inventions demonstrated were developed to support Engelbart’s vision of solving humanity’s most important problems by harnessing computers as tools for collaboration and the augmentation of our collective intelligence.

The presentation later became known as “the mother of all demos,” first called so by Steven Levy in his 1994 book, Insanely Great: The Life and Times of Macintosh, the Computer That Changed Everything.

Engelbart’s Augmentation Research Center was sponsored by Robert Taylor, first at NASA and later at ARPA. In an interview with John Markoff in 1999, Taylor described the prevailing vision in the 1960s of the Internet as a regulated public utility:

The model that some people were pushing in those days for how this was going to spread was that there were going to be gigantic computer utilities. This was the power utility model. I never bought that. By the late 60’s, Moore’s Law was pretty obvious. It was just a matter of time before you could afford to put a computer on everyone’s desk.

Technology and the businesses competing to take advantage of its progress, it turned out, made sure the decentralized and open nature of the Internet would be sustained without turning it into a regulated utility. That also encouraged innovation not only in the underlying technologies, but also in building additional useful layers on top of the open network. Robert Taylor told Markoff in 1999:

I was sure that from the early 1970’s, all the pieces were there at Xerox and at ARPA to put the Internet in the state by the early ’80’s that it is in today [1999]. It was all there. It was physically there. But it didn’t happen for years.

What did happen was that Tim Berners-Lee invented the Web in 1989: decentralized (as opposed to what he called “the straightjacket of hierarchical documentation systems”), open software running on top of the Internet that transformed it from a collaboration tool used by scientists into a communication tool used by close to 4 billion people worldwide.

See also A Very Short History Of The Internet And The Web

Posted in Internet, Internet access

The Turing Test and the Turing Machine

turing_SchoolReport

Alan Turing’s school report, in which a physics teacher noted, “He must remember that Cambridge will want sound knowledge rather than vague ideas.” Source: Sky News

This week’s milestones in the history of technology include Microsoft unleashing MS-DOS and Windows, the first Turing Test and the introduction of the Turing Machine, and IBM launching a breakthrough in computer storage technology.

Read the article on Forbes.com

Posted in Computer history