
Posing during induction ceremonies for the National Inventors Hall of Fame in 1996, Federico Faggin, Marcian “Ted” Hoff Jr., and Stanley Mazor [from left] show off the pioneering microprocessor they created in the early 1970s, the Intel 4004. Photo: Paul Sakuma/AP Photos
You thought it started with the Intel 4004, but the tale is more complicated
Transistors, the electronic amplifiers and switches found at the heart of everything from pocket radios to warehouse-size supercomputers, were invented in 1947. Early devices were of a type called bipolar transistors, which are still in use. By the 1960s, engineers had figured out how to combine multiple bipolar transistors into single integrated circuits. But because of the complex structure of these transistors, an integrated circuit could contain only a small number of them. So although a minicomputer built from bipolar integrated circuits was much smaller than earlier computers, it still required multiple boards with hundreds of chips.
In 1960, a new type of transistor was demonstrated: the metal-oxide-semiconductor (MOS) transistor. At first this technology wasn’t all that promising. These transistors were slower, less reliable, and more expensive than their bipolar counterparts. But by 1964, integrated circuits based on MOS transistors boasted higher densities and lower manufacturing costs than those of the bipolar competition. Integrated circuits continued to increase in complexity, as described by Moore’s Law, but now MOS technology took the lead.
By the end of the 1960s, a single MOS integrated circuit could contain 100 or more logic gates, each containing multiple transistors, making the technology particularly attractive for building computers. These chips with their many components were given the label LSI, for large-scale integration.
Engineers recognized that the increasing density of MOS transistors would eventually allow a complete computer processor to be put on a single chip. But because MOS transistors were slower than bipolar ones, a computer based on MOS chips made sense only when relatively low performance was required or when the apparatus had to be small and lightweight—such as for data terminals, calculators, or avionics. So those were the kinds of computing applications that ushered in the microprocessor revolution.
Most engineers today are under the impression that the revolution began in 1971 with Intel’s 4-bit 4004 and was immediately and logically followed by the company’s 8-bit 8008 chip. In fact, the story of the birth of the microprocessor is far richer and more surprising. In particular, some newly uncovered documents illuminate how a long-forgotten chip—Texas Instruments’ TMX 1795—beat the Intel 8008 to become the first 8-bit microprocessor, only to slip into obscurity.
What opened the door for the first microprocessors, then, was the application of MOS integrated circuits to computing. The first computer to be fashioned out of MOS-LSI chips was something called the D200, created in 1967 by Autonetics, a division of North American Aviation, located in Anaheim, Calif.