For millions of people around the globe, the Internet is a simple fact of life. We take for granted the invisible network that enables us to communicate, navigate, investigate, flirt, shop, and play. Early on, this network-of-networks connected only select companies and university campuses. Nowadays, it follows almost all of us into the most intimate areas of our lives. And yet, very few people know how the Internet became social.
Perhaps that’s because most histories of the Internet focus on technical innovations: packet switching, dynamic routing, addressing, and hypertext, for example. But when anyone other than a network engineer talks about the Internet, he or she is rarely thinking about such things. For most folks, the Internet is principally a medium through which we chat with friends, share pictures, read the news, and do our shopping. Indeed, for those who’ve been online only for the last decade or so, the Internet is just social media’s plumbing—a vital infrastructure that we don’t think much about, except perhaps when it breaks down.
To understand how the Internet became a medium for social life, you have to widen your view beyond networking technology and peer into the makeshift laboratories of microcomputer hobbyists of the 1970s and 1980s. That’s where many of the technical structures and cultural practices that we now recognize as social media were first developed by amateurs tinkering in their free time to build systems for computer-mediated collaboration and communication.
For years before the Internet became accessible to the general public, these pioneering computer enthusiasts chatted and exchanged files with one another using homespun “bulletin-board systems” or BBSs, which later linked a more diverse group of people and covered a wider range of interests and communities. These BBS operators blazed trails that would later be paved over in the construction of today’s information superhighway. So it takes some digging to reveal what came before.
How did it all start? During the snowy winter of 1978, Ward Christensen and Randy Suess, members of the Chicago Area Computer Hobbyists’ Exchange (CACHE), began to assemble what would become the best known of the first small-scale BBSs. Members of CACHE were passionate about microcomputers, at the time an arcane endeavor, and so the club’s newsletters were an invaluable source of information. Christensen and Suess’s novel idea was to put together an online archive of club newsletters using a custom-built microcomputer and a hot new Hayes modem they had acquired.
This modem included an auto-answer feature, to which Christensen and Suess added a custom hardware interface between the modem and the hard-reset switch. Every time the telephone rang, the modem would detect the incoming call and then “cold boot” their system directly into a special host program written in Intel 8080 assembly language. Restarting the system with every call offered a blunt but effective means of recovering from hardware and software crashes—a common occurrence on home-brew hardware of the time.
Once a connection was established, the host program welcomed users to the system, provided a list of articles to read, and invited them to leave messages. Christensen and Suess dubbed the system “Ward and Randy’s Computerized Bulletin Board System,” or CBBS. It was, as the name suggested, an electronic version of the community bulletin boards that you still see in libraries, supermarkets, cafés, and churches.
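What the host program did once a call connected is simple enough to sketch. The following is a minimal, modern analogue in Python, not the original software: it answers TCP connections rather than a ringing phone line, and the port number and bulletin titles are invented placeholders. CBBS itself was written in Intel 8080 assembly and spoke to a Hayes modem.

```python
# Minimal sketch of a BBS-style host loop: greet the caller, list the bulletins,
# and take a message. This is a modern analogue only: it answers TCP connections
# instead of a ringing phone line, and the port and bulletin titles are invented.
# CBBS itself was written in Intel 8080 assembly and talked to a Hayes modem.
import socket

BULLETINS = ["1) Club newsletter, March", "2) Meeting notes", "3) For sale / wanted"]
messages = []                                    # a real BBS would write these to disk

with socket.create_server(("", 2323)) as server:
    while True:                                  # one caller at a time, like one phone line
        conn, _addr = server.accept()
        with conn, conn.makefile("rw", newline="\r\n") as caller:
            caller.write("Welcome to the bulletin board.\r\n")
            for title in BULLETINS:
                caller.write(title + "\r\n")
            caller.write("Type a message and press Enter, then hang up:\r\n")
            caller.flush()
            note = caller.readline().strip()
            if note:
                messages.append(note)
```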
Anyone with access to a teletype or video terminal could dial into CBBS. And after a few months, a small but lively community began to form around the system. In the hobbyist tradition of sharing information, Christensen and Suess wrote up a report about their project titled “Hobbyist Computerized Bulletin Board,” which appeared in the November 1978 issue of the influential computer magazine Byte.
The article provided details about the hardware they used and how they organized and implemented their software. The authors even included their phone numbers and invited readers to take CBBS for a spin. Acknowledging the experimental nature of the system, they encouraged readers to “feel free to hang up and try several times if you have problems.” After the issue hit newsstands, calls to their computer started pouring in.
Over the next few years, hundreds of small-scale systems like CBBS popped up around the country. Perhaps inspired by the Byte article, many of these new systems were organized by local computer clubs. In 1983, TAB Books, publisher of numerous DIY electronics guides, published How to Create Your Own Computer Bulletin Board, by Lary L. Myers. In addition to explaining the concept and motivation behind online services, Myers’s book included complete source code in the BASIC programming language for host software. The back of the book also listed the telephone numbers of more than 275 public bulletin-board systems in 43 U.S. states. Some charged a nominal membership fee, while most were free to use. The roots of social media were beginning to take hold.
In retrospect, 1983 proved to be a critical year for popular computing. In France, the state-sponsored Minitel system completed its first full year of operation in Paris, making online news, shopping, and chat accessible to every citizen in that city. In the United States, novel commercial systems gained traction, with CompuServe reporting more than 50,000 paying subscribers.
Even Hollywood took interest in cyberspace. The 1983 movie WarGames, featuring a teenage hacker who explored remote computer networks from his bedroom, became an unlikely box-office smash. Although the IMSAI microcomputer and acoustic-coupler modem depicted in the movie once cost as much as a cheap used car, curious computer users inspired by the film could buy serviceable alternatives at the nearest Radio Shack for roughly the cost of a good-quality hi-fi stereo. And as the decade progressed, the online universe expanded rapidly from its original core of microcomputer hobbyists to encompass a much wider group.
In 1904, Tesla, determined to see his idea come to fruition, wrote with absolute certainty that “when wireless is fully applied, the earth will be converted into a huge brain, capable of response in every one of its parts.”
In the late 1950s, the Radio Corporation of America thought it had a lock on the self-driving car. The January 1958 issue of Electronic Age, RCA’s quarterly magazine, featured its vision of the “highway of the future”:
“You reach over to your dashboard and push the button marked ‘Electronic Drive.’ Selecting your lane, you settle back to enjoy the ride as your car adjusts itself to the prescribed speed. You may prefer to read or carry on a conversation with your passengers—or even to catch up on your office work. It makes no difference for the next several hundred miles as far as the driving is concerned.
“Fantastic? Not at all. The first long step toward this automatic highway of the future was successfully illustrated by RCA and the State of Nebraska on October 10, 1957, on a 400-foot strip of public highway on the outskirts of Lincoln.”
Two and a half years later, reporters got to experience this “highway of the future” for themselves, on a test track in Princeton, N.J. Cars drove themselves around the track, using sensors on their front bumpers to detect an electrical cable embedded in the road. The cable carried signals warning of obstructions ahead (like road work or a stalled vehicle), and the car could autonomously apply its brakes or switch lanes as necessary. A special receiver on the dashboard would interrupt the car’s own radio to announce information about upcoming exits.
Pictured above is an experimental autonomous car from General Motors, in which the steering wheel and pedals had been replaced with a small joystick and an emergency brake. Meters on the dashboard displayed the car’s speed as well as the distance to the car in front.
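The guidance scheme lends itself to a very simple control loop: two pickup coils straddle the buried cable, the difference between their signals reveals the car’s lateral offset, and the steering corrects in proportion. The sketch below is only a conceptual simulation of that idea, with an invented signal model, coil spacing, and gain; RCA’s actual control electronics were analog hardware.

```python
# Conceptual sketch of cable-following guidance: two pickup coils straddle a
# cable buried along the lane center. Each coil's signal falls off with distance
# from the cable, so the left/right imbalance reveals the car's lateral offset,
# and a proportional controller steers to cancel it. Every constant here is
# invented for illustration; it is not a model of RCA's actual electronics.
def coil_signal(distance_m):
    """Made-up model of induced signal strength vs. distance from the cable."""
    return 1.0 / (1.0 + (distance_m / 0.5) ** 2)

x = 1.0                                 # car starts 1 m to the right of the cable
for _ in range(20):
    left = coil_signal(abs(x - 0.5))    # coil 0.5 m to the left of the car's center
    right = coil_signal(abs(x + 0.5))   # coil 0.5 m to the right of the car's center
    x += 0.5 * (right - left)           # imbalance is negative here, so steer left
print(round(x, 2))                      # ~0.0: the car settles back over the cable
```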
The system was the brainchild of renowned RCA engineer Vladimir Zworykin, who was better known for his pioneering work on television. In a 1975 oral history interview with the IEEE History Center, Zworykin explained his motivation for the autonomous highway: “This growing number of automobiles and people killed in accidents meant something should be done. My idea was that control of automobiles should be done by the road.” (Earlier inventors were similarly motivated by traffic fatalities; see, for example, IEEE Spectrum’s recent article on Charles Adler Jr., who came up with an automatic speed-control system for cars in the late 1920s.)
According to the RCA vision, it would be just a decade or two until all highway driving was autonomous, with human drivers taking over only when their exit approached. Well over half a century later, we’re of course just starting to get comfortable with autonomous vehicles on highways, and the problem of reliably transitioning between autonomous and human control still hasn’t been solved.
One reporter’s account of the Princeton demonstration captured both the promise of the system and its occasional glitches:

“We stayed about 80 to 100 feet behind the other car, and when it stopped we pulled up smoothly to a halt 20 feet behind it. Then the demonstrator had the first car go around to the other side of the track and stop. Then he activated our automatic works and we started zooming around the single-lane oval.
“ ‘Now,’ said the demonstrator, ‘let’s suppose we’re cruising normally down the highway around a blind curve and we don’t know a car is stopped in front of us and—oops!’
“Our automatic auto apparently didn’t know it either. The demonstrator said later he probably had forgotten to flip a switch. As we sped dangerously close, he had to flip back to manual operation and apply the brakes the old-fashioned way, with a foot. We didn’t hit.
“Nobody screamed. But the age of automation can have its moments.”
In July 1969, IEEE Spectrum published an article called “The Electronic Highway,” by Robert E. Fenton and Karl W. Olson, two engineers at Ohio State University who were working on ways to make vehicles operate autonomously when traveling on major highways. Nearly 50 years have passed, which is practically forever in a technological context, yet what’s striking about the article is how closely the problems it describes and the future it anticipates resemble those of the present.
(For more about the history of intelligent transport, make sure to read our feature on Charles Adler, who was working on intelligent traffic control systems in the 1920s.)
The specific solutions that Fenton and Olson propose are a bit outdated, of course, but the problems that they discuss and the future that they look forward to have a lot in common with those peppering current discussions on vehicle autonomy. IEEE members can read the entire article here. We’ll take a look at some excerpts from it, and talk about what’s changed over the last half century, and what hasn’t.
As you read these excerpts, try to keep in mind that the article was published in 1969, and that the 1980s, a decade away, represented the distant future:
An examination of traffic conditions today—congested roadways, a large number of accidents and fatalities, extremely powerful automobiles—indicates the need for improvements in our highway system. Unfortunately, conditions will be much worse in the next decade, for it is predicted that the total number of vehicles registered in the United States in 1980 will be 62 percent greater than in 1960, and 75 percent more vehicle miles will be traveled. If one should look further ahead to the turn of the century, he would see vast sprawling supercities, with populations characterized by adequate incomes, longer life-spans, and increased amounts of leisure time. One predictable result is greatly increased travel. The resulting traffic situation could be chaotic, unless some changes are instituted beforehand.
It is obvious that the traffic problems cannot be solved simply by building more and larger highways, for the costs are too high, both in dollars and in the amount of land. Many alternative solutions have been suggested: high-speed surface rail transportation; a high-speed, electrically powered, air-cushioned surface transportation system… However, in the opinion of the writers, a majority of the public will not be satisfied with only city-to-city transit or even neighborhood-to-neighborhood transit via some form of public transportation. One needs only to witness the common use of private automobiles where such transit already exists. The role of a personal transportation unit is certainly justified by the mobility, privacy, and freedom afforded the occupants. It seems certain that this freedom, which dictates the spatial pattern of their lives, will not be relinquished.
I don’t know about adequate incomes or increased amounts of leisure time leading to greatly increased travel, but what’s definitely true is that more and more people are commuting longer and longer distances to get to work. The authors were certainly correct that the more time we spend in our vehicles, the more important autonomy becomes. At the same time, individual car ownership and use are starting to be displaced by more decentralized mobility services, and autonomy will enable that shift as well. It’s funny to see that mention of a “high-speed, electrically powered, air-cushioned surface transportation system”; it sounds like they were foretelling the Hyperloop.
In this light, one satisfactory solution would be the automation of individual vehicles. This approach has been examined by a number of researchers, for in addition to the retention of the individual transportation unit, it appears that considerable improvement in highway capacity and safety as well as a considerable reduction in driver effort can be achieved. However, there is an extremely large number of possible systems for achieving this goal—the writers have counted 1296—and great care must be exercised so that an optimum or near optimum one is chosen.
The approach described in this article involves the concept of a dual-mode system, whereby the vehicle (which must be specially equipped) is manually controlled on nonautomated roads and automatically controlled on automated ones.
The system that the authors suggest, which is typical for automated driving system ideas of that era, relies heavily on infrastructure built into the highway directly. The well-defined and highly structured environments of highways are where today’s autonomous cars both perform the best and are the most valuable. The authors of the 1969 article weren’t all that worried about automating other types of roads, because they figured that it simply wouldn’t be worth the hassle and expense. In the near term, this is where many autonomy projects are finding success as well.
However, highway autonomy that’s based on the highway itself is much harder to expand, and the total infrastructure overhaul needed to implement it in the first place wouldn’t be cheap. Even in 1969 dollars, it sounds expensive:
It is expected that with the introduction and extended use of microcircuits, it will be possible to install all necessary equipment in the vehicle for several hundred dollars. The total investment in computers and highway-based sensors would probably average anywhere from $20,000 to $200,000 per lane mile (about $12,000 to $120,000 per lane kilometer), depending on the form of the chosen system and future technological advances.
One can expect two principal returns from such an investment: greatly increased lane capacity at high speeds and a reduction in the number of highway accidents. Estimates of the former range up to 800 percent and would depend, of course, on the chosen system design. The expectation of fewer accidents arises from the fact that an electronic system can provide a shorter reaction time and greater consistency than a driver can.
It’s interesting that Fenton and Olson sound like they’re more focused on increasing capacity than on safety, which is the opposite of how most vehicle autonomy projects are presented right now. This is likely because the efficiency benefits require some critical mass of autonomous vehicles all working together. It’s much easier to achieve this with expensive automated highways and cheap automated vehicles, rather than ‘dumb’ highways that require each vehicle to have its own automation system, which is what we’ve got going now.
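To get a feel for what the quoted figures imply, here is a rough, purely illustrative calculation. Only the per-lane-mile dollar range comes from the article; the corridor length and lane count below are made up.

```python
# Back-of-the-envelope cost of instrumenting a hypothetical automated corridor,
# using the article's estimate of $20,000 to $200,000 (1969 dollars) per lane mile.
# The 100-mile, 4-lane corridor is an invented example, not from the article.
lane_miles = 100 * 4
low_per_mile, high_per_mile = 20_000, 200_000
print(f"Highway-side equipment: ${lane_miles * low_per_mile:,} "
      f"to ${lane_miles * high_per_mile:,} (1969 dollars)")
# Prints: Highway-side equipment: $8,000,000 to $80,000,000 (1969 dollars)
```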
As for the vehicles themselves, the “near future” might provide something better than internal combustion, the authors hope:
It is probable that vehicles would be powered by the internal combustion engine; however, a number of other prime movers— the DC motor, the gas turbine, the steam engine, and the linear induction motor— may be available and practical in the near future. The eventual choice will probably be dictated by such factors as air pollution and the continuing availability of cheap fossil fuel.
One attractive possibility involves the use of electrically powered cars, which would be self-powered via batteries on non-automated roads and externally powered on automated ones. Here, power would be supplied through a pickup probe protruding from the vehicle, and control could be obtained by simply controlling the power flow.
Another important contemporary issue that the 1969 article mulls over: Should a human play any role at all in operating the autonomous vehicle?
An important question that is frequently raised is the advisability of allowing the driver to override the system. If he were able to do so, there would be a large measure of randomness in the system, which would be undesirable from a standpoint of both safety and system efficiency.
This remains one of the more difficult problems with consumer vehicle autonomy: unless you can promise 100-percent capability for your system, a human “driver” has to be involved. But how do you make sure that the human is alert and able to take over when necessary? Tesla reminds drivers to keep their hands on the wheel, and will begin to gradually slow the car if they persistently ignore the warnings. Google, on the other hand, doesn’t trust humans even that far, which is why it’s working on autonomous cars without steering wheels.
The authors of the 1969 article have a slightly different perspective, but their concern about the “undesirable randomness” of human drivers foreshadows the day (it’s getting closer, we promise) when the most dangerous car on the road will be the one driven manually by a human.
There is also a need for communication links to a central station so that there will be a complete “picture” of the traffic state at all times.
There seems little question that vehicle automation is technologically feasible; however, a tremendous amount of effort in both research and development will be required before a satisfactory automatic system is in operation. This effort must involve not only vehicle-control studies, but also an intensive investigation of the present driver-vehicle complex, since the knowledge gained will be necessary for the proper specification and introduction of the control system components. Further, the need exists for intensive overall system studies so that optimum strategies can be chosen for headway spacing control, merging and lane changing, and the interfacing of automated highways with other modes of future transportation.
The authors’ conclusion is as true now as it was in 1969. Since then, the focus has changed somewhat, with highway autonomy seen as just a step towards full autonomy—and not necessarily a step that can be taken independently. Technology has improved to the point where it’s now feasible to pack all of the sensors and computers needed for autonomy into vehicles themselves, rather than having to rely on external infrastructure. This makes the transition to autonomous vehicles more straightforward, partly because it’s something that can be motivated by the market rather than by the government.
At the same time, though, I certainly appreciate the vision that the authors had for expensive automated highways that the average driver could take advantage of with a relatively inexpensive car. Here in Washington, D.C., for example, adding autonomy infrastructure to the beltway alone would vastly improve the commuting experience for an enormous number of people on a daily basis—that is, if existing cars could be cheaply retrofitted with the technology. This isn’t the trajectory we’re on anymore, but it’s interesting to look back and think about what would have happened if we’d taken a different road.
Mark Fields, the chief executive of Ford Motor Company, said his company would sell completely self-driving cars by about 2025, after first providing them through a ride-hailing service in 2021.
Such cars would have “no steering wheel, no brake pedal,” he said. “Essentially a driver is not going to be required.”
At first these robocars will cost more than conventional cars, he admitted, but the ride-hailing application will make up for that by saving the salary of a professional driver. Later, the rising scale of production will lower the sticker price enough to justify offering the robocars for sale. Ford can make money either way.
“Now vehicle miles traveled are just as important as the number of vehicles sold,” Fields said.
As robocars proliferate and cities impose congestion fees and other measures to limit traffic, total car sales may well drop. “But you can also argue that autonomous vehicles will be running continuously and will rack up more miles—and that that will mean more replacement.”
Ford has begun framing itself as a mobility company rather than a mere car company, and it has emphasized the point recently by announcing ventures to provide cities with electric-bicycle services and shuttle services. Asked about recent drops in the company’s share prices—a sign that investors aren’t happy with a program that can only bear fruit a decade hence—Fields said his company wasn’t managed for the short run alone.
He quoted Wayne Gretzky, the famed Canadian hockey player: “You’ve got to skate to where the puck is going to go.”
Posing during induction ceremonies for the National Inventors Hall of Fame in 1996, Federico Faggin, Marcian “Ted” Hoff Jr., and Stanley Mazor [from left] show off the pioneering microprocessor they created in the early 1970s, the Intel 4004. Photo: Paul Sakuma/AP Photos
You thought it started with the Intel 4004, but the tale is more complicated
By Ken Shirriff
Posted 30 Aug 2016
Transistors, the electronic amplifiers and switches found at the heart of everything from pocket radios to warehouse-size supercomputers, were invented in 1947. Early devices were of a type called bipolar transistors, which are still in use. By the 1960s, engineers had figured out how to combine multiple bipolar transistors into single integrated circuits. But because of the complex structure of these transistors, an integrated circuit could contain only a small number of them. So although a minicomputer built from bipolar integrated circuits was much smaller than earlier computers, it still required multiple boards with hundreds of chips.
In 1960, a new type of transistor was demonstrated: the metal-oxide-semiconductor (MOS) transistor. At first this technology wasn’t all that promising. These transistors were slower, less reliable, and more expensive than their bipolar counterparts. But by 1964, integrated circuits based on MOS transistors boasted higher densities and lower manufacturing costs than those of the bipolar competition. Integrated circuits continued to increase in complexity, as described by Moore’s Law, but now MOS technology took the lead.
By the end of the 1960s, a single MOS integrated circuit could contain 100 or more logic gates, each containing multiple transistors, making the technology particularly attractive for building computers. These chips with their many components were given the label LSI, for large-scale integration.
Engineers recognized that the increasing density of MOS transistors would eventually allow a complete computer processor to be put on a single chip. But because MOS transistors were slower than bipolar ones, a computer based on MOS chips made sense only when relatively low performance was required or when the apparatus had to be small and lightweight—such as for data terminals, calculators, or avionics. So those were the kinds of computing applications that ushered in the microprocessor revolution.
Most engineers today are under the impression that the revolution began in 1971 with Intel’s 4-bit 4004 and was immediately and logically followed by the company’s 8-bit 8008 chip. In fact, the story of the birth of the microprocessor is far richer and more surprising. In particular, some newly uncovered documents illuminate how a long-forgotten chip—Texas Instruments’ TMX 1795—beat the Intel 8008 to become the first 8-bit microprocessor, only to slip into obscurity.
What opened the door for the first microprocessors, then, was the application of MOS integrated circuits to computing. The first computer to be fashioned out of MOS-LSI chips was something called the D200, created in 1967 by Autonetics, a division of North American Aviation, located in Anaheim, Calif.
The US Navy recalls Grace Murray Hopper to active duty. From 1967 to 1977, Hopper served as the director of the Navy Programming Languages Group in the Navy’s Office of Information Systems Planning and was promoted to the rank of Captain in 1973. As part of a Navy-wide COBOL standardization program, she developed validation software for COBOL and its compiler.
The new language COBOL (COmmon Business-Oriented Language), first designed in 1959, extended Hopper’s FLOW-MATIC language with some ideas from the IBM equivalent, COMTRAN. Hopper’s belief that programs should be written in a language that was close to English (rather than in machine code or in languages close to machine code, such as assembly languages) was captured in the new business language, and COBOL went on to be the most ubiquitous business language to date.
Hopper made many major contributions to computer science throughout her very long career, including what is likely the first compiler ever written, “A-0.” She appears to have also been the first to coin the word “bug” in the context of computer science, taping into her logbook a moth which had fallen into a relay of the Harvard Mark II computer. She died on January 1, 1992.
Hopper made many choice observations about the new profession she helped establish. Among them:
Programmers… arose very quickly, became a profession very rapidly, and were all too soon infected with a certain amount of resistance to change.
Life was simple before World War II. After that, we had systems.
Bell Labs computer center 1968
August 3, 1960
Bell Laboratories scientists conduct a coast-to-coast telephone conversation by “bouncing” their voices off the Moon.
Space Shuttle Atlantis
August 4, 1991
The first email message is sent from space to Earth. The Houston Chronicle reported:
Electronic mail networks, the message medium of the information age, made their space-age debut Sunday aboard the shuttle Atlantis as part of an effort to develop a communications system for a space station… Astronauts Shannon Lucid and James Adamson conducted the first experiment with the e-mail system Sunday afternoon, exchanging a test message with Marcia Ivins, the shuttle communicator at Johnson Space Center… The messages follow a winding path from the shuttle to a satellite in NASA’s Tracking Data Relay Satellite System to the main TDRSS ground station in White Sands, N.M., back up to a commercial communications satellite, then down to Houston, where they enter one or more computer networks… The shuttle tests are part of a larger project to develop computer and communications systems for the space station Freedom, which the agency plans to assemble during the late 1990s.
The first Vitaphone sound-on-disc film, Don Juan, premieres at Warner Bros.’ Warner Theatre in New York. The sound is recorded on a 16-inch disc playing at 33⅓ rpm. The film was a great success at the box office but failed to cover the expensive budget Warner Bros. put into its production.
Tim Berners-Lee posts a summary of the WorldWideWeb project to the alt.hypertext newsgroup. His message said, in part: “The WorldWideWeb (WWW) project aims to allow links to be made to any information anywhere… The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!”
In Weaving the Web, Berners-Lee wrote: “Putting the Web out on alt.hypertext was a watershed event. It exposed the Web to a very critical academic community… From then on, interested people on the Internet provided the feedback, stimulation, ideas, source-code contributions, and moral support that would have been hard to find locally. The people of the Internet built the Web, in true grassroots fashion.”
Four years later, in 1995, many were still skeptical of the Web’s potential, as this anecdote from Dr. Hellmuth Broda (in Pondering Technology) demonstrates:
I predicted at the Basler Mediengruppe Conference in Interlaken (50 Swiss newspapers and magazines) that classified ads will migrate to the web and that advertisement posters will soon carry URL’s. The audience of about 100 journalists burst into a roaring laughter. The speaker after me then reassured the audience that this “internet thing” is a tech freak hype which will disappear as fast as we saw it coming. Never–he remarked–people will go to the internet to search for classified ads and he also told that never print media will carry these ugly URL’s. Anyway the total readership of the Web in Switzerland at that time, as he mentioned, was less than that of the “Thuner Tagblatt,” the local newspaper of the neighboring town. It is interesting to note though that in 1998 (if my memory is correct) the same gentleman officially launched the first Swiss website for online advertisement and online classified ads (today SwissClick AG).
IBM Mark I (Automatic Sequence Controlled Calculator), exterior designed by Bel Geddes.
August 7, 1944
The IBM Automatic Sequence Controlled Calculator (ASCC), also known as the Harvard Mark I and the largest electromechanical calculator ever built, was officially presented to, and dedicated at, Harvard University. Martin Campbell-Kelly and William Aspray in Computer:
The dedication of the Harvard Mark I captured the imagination of the public to an extraordinary extent and gave headline writers a field day. American Weekly called it “Harvard Robot Super-Brain” while Popular Science Monthly declared “Robot Mathematician Knows All the Answers.”… The significance of this event was widely appreciated by scientific commentators and the machine also had an emotional appeal as a final vindication of Babbage’s life.
In 1864 [Charles] Babbage had written: “Half a century may probably elapse before anyone without those aids which I leave behind me, will attempt so unpromising a task.” Even Babbage had underestimated how long it would take…. [The ASCC] was perhaps only ten times faster than he had planned for the Analytical Engine. Babbage would never have envisioned that one day electronic machines would come into the scene with speeds thousands of times faster than he had ever dreamed. This happened within two years of the Harvard Mark I being completed.
IBM applied the lessons it learned about large calculator development in its own Selective Sequence Electronic Calculator (SSEC), a project undertaken after Howard Aiken angered IBM’s Thomas Watson Sr. at the ASCC announcement by not acknowledging IBM’s involvement and financial support (which included commissioning the industrial designer Norman Bel Geddes to give the calculator an exterior suitable to a “Giant Brain”). Thomas and Martha Belden in The Lengthening Shadow:
Few events in Watson’s life infuriated him as much as the shadow cast on his company’s achievement by that young mathematician. In time his fury cooled to resentment and desire for revenge, a desire that did IBM good because it gave him an incentive to build something better in order to capture the spotlight.
Chess playing robot, 2009
August 7, 1970
The first all-computer championship was held in New York and won by CHESS 3.0, a program written by Atkin and Gorlen at Northwestern University. Six programs had entered. The World Computer Chess Championship (WCCC) is today an annual event organized by the International Computer Games Association (ICGA).
The Atlantic Cable is successfully completed. The first working cable, completed in 1858, failed within a few weeks. Before it did, however, it prompted the biggest parade New York had ever seen and accolades that described the cable, as one newspaper said, as “next only in importance to the ‘Crucifixion.’”
Tom Standage quotes similar reactions in The Victorian Internet:
“The completion of the Atlantic Telegraph…has been the cause of the most exultant burst of popular enthusiasm that any event in modern times has ever elicited. The laying of the telegraph cable is regarded, and most justly, as the greatest event in the present century.”
And “Since the discovery of Columbus, nothing has been done in any degree comparable to the vast enlargement which has thus been given to the sphere of human activity.” Notes Standage:
A popular slogan suggested that the effect of the electric telegraph would be to “make muskets into candlesticks.” Indeed, the construction of a global telegraph network was widely expected… to result in world peace.
The successful installation of the cable in 1866 resulted in similar—and so familiar to us today—pronouncements. Writes Standage:
The hype soon got going again once it became clear that, this time, the transatlantic link was here to stay… [The cable] was hailed as “the most wonderful achievement of our civilization”… Edward Thornton, the British ambassador, emphasized the peacemaking potential of the telegraph. “What can be more likely to effect [peace] than a constant and complete intercourse between all nations and individuals in the world?” he asked.
A number of inventors in addition to Elisha Gray, including Charles Bourseul, Thomas Edison, and Alexander Graham Bell, worked on similar methods for transmitting a number of telegraph messages simultaneously over a single telegraph wire by using different audio frequencies or channels for each message. Their efforts to develop “acoustic telegraphy,” in order to reduce the cost of telegraph service, led to the invention of the telephone.
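The idea behind acoustic telegraphy is easy to see in miniature: give each message its own audio tone, key the tone on and off, add the tones together on the shared wire, and sort them back out by frequency at the receiving end. The following toy sketch illustrates that principle with invented tone frequencies and bit patterns; it models no actual nineteenth-century apparatus.

```python
# Toy illustration of the "acoustic telegraphy" idea (frequency-division
# multiplexing): each message keys its own audio tone on and off, the shared
# wire carries the sum, and the receiver separates the messages by frequency.
# Tone frequencies and bit patterns are invented for this demonstration.
import numpy as np

fs = 8000                                   # samples per second on the shared "wire"
n = 400                                     # samples per on/off symbol (0.05 s)
t = np.arange(n) / fs
tones = {"A": 600.0, "B": 900.0}            # one carrier tone per channel, in hertz
bits = {"A": [1, 0, 1, 1, 0], "B": [0, 1, 1, 0, 1]}

# Transmit: sum the keyed tones, symbol by symbol, onto one signal.
wire = np.concatenate([
    sum(bits[ch][k] * np.sin(2 * np.pi * tones[ch] * t) for ch in tones)
    for k in range(5)
])

# Receive: correlate each symbol slot against each channel's tone.
for ch, freq in tones.items():
    ref = np.exp(-2j * np.pi * freq * np.arange(n) / fs)
    decoded = [int(abs(wire[k * n:(k + 1) * n] @ ref) / n > 0.25) for k in range(5)]
    print(ch, decoded)                      # recovers each channel's original bits
```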
John Logie Baird gives the first public demonstration of his large screen television in the UK at the London Coliseum Variety Theatre. The television’s screen displays an image thirty by seventy inches, created by 2,100 lamps. The entire device is built into a small, wheeled trailer that can be moved on and off stage. The exhibition will continue for three weeks.
Two weeks earlier, on the roof of the Baird Company’s building, Guglielmo Marconi and other dignitaries watched a television play, “The Man with the Flower in his Mouth,” on the new 2,100-lamp large screen in the canvas tent “theatre” set up for the occasion. Prime Minister Ramsay MacDonald, to whom Baird had gifted a deluxe home “Televisor” a few months earlier, also tuned in to the broadcast at No. 10 Downing Street. Baird wrote in 1932:
The application of television to the cinema and places of public entertainment involves the use of a large screen, and considerable development work has been done in this direction. The broadcasting of the play “The Man with the Flower in his Mouth” was not only shown on the ordinary “Televisor” receivers but was also shown to a large audience on the roof of the Baird Long Acre premises on a screen 2 feet by 5 feet, and the same screen was shown in Paris, Berlin, and Stockholm; but while it attracted large audiences, the pictures could not in any way compare with the cinematograph, and the attraction was one of novelty. Since that time the screen has been so far developed that it is now approaching the perfection necessary to give full entertainment value apart from the curiosity attraction, and I believe that one of the largest fields for television lies in the cinema of the future.
July 28, 1981
IBM System/23 Datamaster
IBM announces its first desktop computer, the System/23 Datamaster. It was based on Intel’s 8-bit 8085 processor and featured a viewing screen, up to 4.4MB of diskette storage, and Business Management Accounting and Word Processing programs. It was “designed to be taken out of the carton, set up, checked out and operated by first-time users.” At $9,830 (with optional word processing at an additional $1,100 to $2,200), the Datamaster was IBM’s lowest-priced small business system.
Two weeks later, IBM introduced its flagship product for the personal computing market, the IBM PC.
July 30, 1959
Intel co-founders Gordon Moore (seated) and Robert Noyce in 1970.
Robert Noyce and Gordon Moore file a patent application for a semiconductor integrated circuit based on the planar process on behalf of the Fairchild Semiconductor Corp. The patent application will be challenged by a Texas Instruments (TI) application on behalf of Jack Kilby, but in 1969, the courts will rule in favor of Noyce and Moore.