Computer Networking: The Technology Trend Driving the Success of Apple, Microsoft and Google


Bob Metcalfe (left) and Dave Boggs, inventors of Ethernet

This week’s milestones in the history of technology reveal the most important—and quite neglected—technology trend contributing to the success of Apple, Google, and Microsoft, ultimately making them today’s three most valuable US companies.

Apple and Microsoft are considered the premier—and long-lasting—examples of what has come to be known as the “PC era,” the former focusing on a self-contained (and beautifully designed) system of hardware and software, the latter on software that became the de facto standard for PCs.

On December 17, 1974, the January 1975 issue of Popular Electronics hit the newsstands. Its cover featured the Altair 8800, the “World’s First Minicomputer Kit to Rival Commercial Models.” In fact, it was the first commercially successful microcomputer (i.e., PC) kit and the start of what came to be known as the “personal computing revolution.”

Les Solomon, a Popular Electronics editor, agreed to feature the kit on the cover of the always-popular January issue after Ed Roberts, co-founder of Micro Instrumentation and Telemetry Systems (MITS), promised him a professional-looking, complete kit based on Intel’s 8080 chip. Stan Veit: “Roberts was gambling that with the computer on the cover of Popular Electronics, enough of the 450,000 readers would pay $397 to build a computer, even if they didn’t have the slightest idea of how to use it.”

MITS needed to sell 200 kits to break even, but within a few months it sold about 2,000 through Popular Electronics alone. Veit: “That is more computers of one type than had ever been sold before in the history of the industry.”

Paul Allen, then a programmer with Honeywell, was visiting his high-school friend Bill Gates, a student at Harvard University, when he saw the January 1975 issue of Popular Electronics at the Out of Town News newsstand in Harvard Square. Grasping the opportunity opened up by personal computers, and eager not to let others get there first, the two developed a version of the BASIC programming language for the Altair in just a few weeks. In April 1975, they moved to MITS’ headquarters in Albuquerque, New Mexico, signing their contract with MITS as “Paul Allen and Bill Gates doing business as Micro-Soft.”

PC kits like the Altair 8800 also gave rise to local gatherings of electronics hobbyists such as Silicon Valley’s Homebrew Computer Club which first met on March 5, 1975. Steve Wozniak presented to members of the club his prototype for a fully assembled PC which in July 1976 went on sale as the Apple I. On December 12, 1980, Apple Computer went public in the largest IPO since Ford Motor Company went public in 1956. Originally priced to sell at $14 a share, the stock opened at $22 and all 4.6 million shares were sold almost immediately. The stock rose almost 32% that day to close at $29, giving the company a valuation of $1.78 billion. In August 2012, Apple became the most valuable company in history in terms of market capitalization, at $620 billion.

The PCs on which both Apple and Microsoft built their early fortunes were not “revolutionary” so much as a mainframe-on-a-desk, resembling in their conception and architecture the much larger machines that had dominated the computer industry for the previous three decades. They did, however, give rise to new desktop applications (e.g., spreadsheets) that contributed greatly to personal productivity. But within the confines of a stand-alone PC, their impact was limited.

The real potential of the PC to improve work processes and make a profound impact on productivity was realized only in the mid-1990s with the commercial success of Local Area Networks (LANs), the linking of PCs in one building-wide or campus-wide network. That breakthrough invention saw the light of day in the early 1970s at Xerox PARC. Later, on December 13, 1977, Bob Metcalfe, David Boggs, Charles Thacker, and Butler Lampson received a patent for Ethernet, titled “Multipoint Data Communication System with Collision Detection.”

Today, Ethernet is the dominant networking technology, linking computers not just locally but also over long distances, in a Wide-Area Network (WAN). The best-known WAN today is the Internet, a network of networks linking billions of devices all around the world. Limited mostly to academic research in its first twenty-five years, the Internet’s real potential was realized by the 1989 invention of the World-Wide Web, software running over the Internet that facilitated a higher level of linking between numerous content elements (documents, photos, videos).
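The “collision detection” in the patent’s title refers to how stations on classic shared-medium Ethernet recover when two of them transmit at once: each backs off for a random, exponentially growing number of slot times before retrying. A minimal sketch of that truncated binary exponential backoff idea (a simplified illustration, not the full IEEE 802.3 state machine):

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """Truncated binary exponential backoff, the retry rule used by
    classic shared-medium Ethernet after a collision is detected.

    After the n-th consecutive collision, a station waits a random
    number of slot times drawn uniformly from [0, 2**min(n, 10) - 1],
    so repeated colliders spread out and a re-collision grows unlikely.
    """
    k = min(attempt, max_exponent)
    return random.randint(0, 2 ** k - 1)

# Before any collision, a station transmits immediately (0 slots);
# with each successive collision, the possible wait window doubles.
waits = [backoff_slots(n) for n in range(1, 5)]
```

The doubling window is the design insight: it adapts the retry rate to the (unknown) number of contending stations without any central coordinator, which is exactly what made Ethernet decentralized.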

On December 14, 1994, the Advisory Committee of the World-Wide Web Consortium (W3C) met for the first time at MIT. The Web’s inventor, Tim Berners-Lee, in Weaving the Web: “The meeting was very friendly and quite small with only about twenty-five people. Competitors in the marketplace, the representatives came together with concerns over the potential fragmentation of HTML…if there was any centralized point of control, it would rapidly become a bottleneck that would restrict the Web’s growth and the Web would never scale up. Its being ‘out of control’ was very important.”

The “out of control” nature of the Web allowed new companies to emerge and benefit from its success in driving the further proliferation of computing. The Web moved the computer from being mostly an enterprise productivity tool to being the foundation for a myriad of innovations affecting consumers and their daily lives. Google (now Alphabet) was one of these innovations, becoming the best guide to the ever-increasing volume of linked information on the Web.

Today, Apple is the most valuable US company at $870 billion, followed by Alphabet at $724 billion and Microsoft at $649 billion.

Originally published on

Posted in Computer history, Computer Networks

The Origins of the Open Internet and Net Neutrality


ARPAnet in 1971 (Source: Larry Roberts’ website)

Two of this week’s milestones in the history of technology highlight the foundations laid 50 years ago that are at the core of today’s debates over net neutrality and the open Internet.

On December 6, 1967, the Advanced Research Projects Agency (ARPA) at the United States Department of Defense issued a four-month contract to Stanford Research Institute (SRI) for the purpose of studying the “design and specification of a computer network.” SRI was expected to report on the effects of selected network tasks on Interface Message Processors (today’s routers) and “the communication facilities serving highly responsive networks.”

The practical motivation for the establishment of what later became known as the Internet was the need to open up and connect isolated, proprietary communication networks. When Robert Taylor became director of the Information Processing Techniques Office (IPTO) at ARPA in February 1966, he found that each scientific research project his agency was sponsoring required its own specialized terminal and unique set of user commands. Most important, while computer networks benefited the scientists collaborating on each project, creating project-specific communities, there was no way to extend the collaboration across scientific communities. Taylor proposed to his boss the ARPAnet, a network that would connect the different projects ARPA was sponsoring.

What Taylor and his team envisioned was an open and decentralized network as opposed to a closed network that is managed from one central location. In early 1967, at a meeting of ARPA’s principal investigators at Ann Arbor, Michigan, Larry Roberts, the ARPA network program manager, proposed his idea for a distributed ARPAnet as opposed to a centralized network managed by a single computer.

Roberts’ proposal that all host computers connect to one another directly, doing double duty as both research computers and networking routers, was not endorsed by the principal investigators, who were reluctant to dedicate valuable computing resources to network administration. After the meeting broke up, Wesley Clark, a computer scientist at Washington University in St. Louis, suggested to Roberts that the network be managed by identical small computers, each attached to a host computer. Accepting the idea, Roberts named the small computers dedicated to network administration “Interface Message Processors” (IMPs); they later evolved into today’s routers.

In October 1967, at the first ACM Symposium on Operating Systems Principles, Roberts presented “Multiple computer networks and intercomputer communication,” in which he described the architecture of the “ARPA net” and argued that giving scientists the ability to explore data and programs residing in remote locations would reduce duplication of effort and result in enormous savings: “A network will foster the ‘community’ use of computers. Cooperative programming will be stimulated, and in particular fields or disciplines it will be possible to achieve ‘critical mass’ of talent by allowing geographically separated people to work effectively in interaction with a system.”

In August 1968, ARPA sent out an RFQ to 140 companies and, in December 1968, awarded the contract for building the first four IMPs to Bolt, Beranek and Newman (BBN). These would become the first nodes of the network we know today as the Internet.

The same month the contract was awarded, on December 9, 1968, SRI’s Doug Engelbart demonstrated the oNLine System (NLS) to about one thousand attendees at the Fall Joint Computer Conference held by the American Federation of Information Processing Societies (AFIPS). With this demonstration, Engelbart took the decentralized and open vision of the global network a step further, showing what could be done with its interactive, real-time communications.

The demonstration introduced the first computer mouse, hypertext linking, multiple windows with flexible view control, real-time on-screen text editing, and shared-screen teleconferencing. Engelbart and his colleague Bill English, the engineer who designed the first mouse, conducted a real-time demonstration in San Francisco with co-workers connected from his Augmentation Research Center (ARC) at SRI’s headquarters in Menlo Park, CA. The inventions demonstrated were developed to support Engelbart’s vision of solving humanity’s most important problems by harnessing computers as tools for collaboration and the augmentation of our collective intelligence.

The presentation later became known as “the mother of all demos,” first called so by Steven Levy in his 1994 book, Insanely Great: The Life and Times of Macintosh, the Computer That Changed Everything.

Engelbart’s Augmentation Research Center was sponsored by Robert Taylor, first at NASA and later at ARPA. In a 1999 interview with John Markoff, Taylor described the prevailing 1960s vision of the Internet as a regulated public utility:

The model that some people were pushing in those days for how this was going to spread was that there were going to be gigantic computer utilities. This was the power utility model. I never bought that. By the late 60’s, Moore’s Law was pretty obvious. It was just a matter of time before you could afford to put a computer on everyone’s desk.

Technology, and the businesses competing to take advantage of its progress, turned out to sustain the decentralized and open nature of the Internet without turning it into a regulated utility. That also encouraged innovation not only in the underlying technologies but also in building additional useful layers on top of the open network. Robert Taylor told Markoff in 1999:

I was sure that from the early 1970’s, all the pieces were there at Xerox and at ARPA to put the Internet in the state by the early ’80’s that it is in today [1999]. It was all there. It was physically there. But it didn’t happen for years.

What did happen was the Web, which Tim Berners-Lee invented in 1989: a decentralized (as opposed to what he called “the straightjacket of hierarchical documentation systems”), open piece of software running on top of the Internet that transformed it from a collaboration tool used by scientists into a communication tool used by close to 4 billion people worldwide.

See also A Very Short History Of The Internet And The Web

Posted in Internet, Internet access

The Turing Test and the Turing Machine


Alan Turing’s school report, in which a physics teacher noted, “He must remember that Cambridge will want sound knowledge rather than vague ideas.” Source: Sky News

This week’s milestones in the history of technology include Microsoft unleashing MS-DOS and Windows, the first Turing Test and the introduction of the Turing Machine, and IBM launching a breakthrough in computer storage technology.

Read the article on

Posted in Computer history

History of Human-Machine Interface


Source: Infographicszone

Posted in Computer history

The Challenge of Automation, 1965


Kevin Maney, “Will AI Robots Turn Humans Into Pets?” April 2, 2017:

Here’s a question worth considering: Is this AI tsunami really that different from the changes we’ve already weathered? Every generation has felt that technology was changing too much too fast. It’s not always possible to calibrate what we’re going through while we’re going through it.

In January 1965, Newsweek ran a cover story titled “The Challenge of Automation.” It talked about automation killing jobs. In those days, “automation” often meant electro-mechanical contraptions on the order of your home dishwasher, or in some cases the era’s newfangled machines called computers. “In New York City alone,” the story said, “because of automatic elevators, there are 5,000 fewer elevator operators than there were in 1960.” Tragic in the day, maybe, but somehow society has managed without those elevator operators.

That 1965 story asked what effect the elimination of jobs would have on society. “Social thinkers also speak of man’s ‘need to work’ for his own well-being, and some even suggest that uncertainty over jobs can lead to more illness, real or imagined.” Sounds like the same discussion we’re having today about paying everyone a universal basic income so we can get by in a post-job economy, and whether we’d go nuts without the sense of purpose work provides.

Just like now, back then no one knew how automation was going to turn out. “If America can adjust to this change, it will indeed become a place where the livin’ is easy—with abundance for all and such space-age gadgetry as portable translators…and home phone-computer tie-ins that allow a housewife to shop, pay bills and bank without ever leaving her home.” The experts of the day got the technology right, but whiffed on the “livin’ is easy” part.

Posted in Automation

The History of the Internet of Things According to William Belk

From early visionaries to futuristic applications, the Internet of Things was fueled by raw innovation in connectivity and robotics.

~1900: Radio Control

~1985: Consumer Cellular Phone

~1985: Electronic Toll Collection via Transmitter

~2000: Wi-Fi

~2000: RFID Passports

Posted in Internet of things

19th Century Selfies: The Countess of Castiglione


The Metropolitan Museum:

Virginia Oldoini (1837–1899), born to an aristocratic family from La Spezia, entered into an arranged and loveless marriage at age seventeen to Count Francesco Verasis di Castiglione. Widely considered to be the most beautiful woman of her day, the countess was sent to Paris in 1856 to bolster the interest of Napoleon III in the cause of Italian unification. She was instructed by her cousin, the minister Camillo Cavour, to “succeed by whatever means you wish—but succeed!” She caused a sensation at the French court and quickly—if briefly—became the emperor’s mistress. Separated from the husband she had bankrupted by her extravagances, she retreated to Italy in self-imposed exile in 1858. She returned to Paris in 1861, however, and once more became a glamorous and influential fixture of Parisian society, forming numerous liaisons with notable aristocrats, financiers, and politicians, while cultivating an image of a mysterious femme fatale.

In July 1856, the countess made her first visit to the studio of Mayer & Pierson, one of the most sought-after portrait studios of the Second Empire. Her meeting with Pierre-Louis Pierson led to a collaboration that would produce more than 400 portraits concentrated into three distinct periods—her triumphal entry into French society, 1856–57; her reentry into Parisian life, from 1861 to 1867; and toward the end of her life, from 1893 to 1895.

Fascinated by her own beauty, the countess would attempt to capture all its facets and re-create for the camera the defining moments of her life. Far from being merely a passive subject, it was she who decided the expressive content of the images and assumed the art director’s role, even to the point of choosing the camera angle. She also gave precise directions on the enlargement and repainting of her images in order to transform the simple photographic documents into imaginary visions—taking up the paintbrush herself at times. Her painted photographs are among the most beautiful examples of the genre.

While many of the portraits record the countess’ triumphant moments in Parisian society, wearing the extravagant gowns and costumes in which she appeared at soirées and masked balls, in others she assumes roles drawn from the theater, opera, literature, and her own imagination. Functioning as a means of self-advertisement as well as self-expression, they show the countess, by turns, as a mysterious seductress, a virginal innocent, and a charming coquette. Provided with titles of her own choosing, and often elaborately painted under her direction, these images were frequently sent to lovers and admirers as tokens of her favor. Unique in the annals of nineteenth-century photography, these works have been seen as forerunners to the self-portrait photography of later artists such as Claude Cahun, Pierre Molinier, and Cindy Sherman.


See also here and here and here

Posted in Photography