The History of the Integrated Circuit
The Integrated Circuit Generations
What is a Microchip?
By definition, the integrated circuit, also known as the microchip, is a set of interconnected electronic components, such as transistors and resistors, that are etched or imprinted onto a tiny chip of semiconducting material, such as silicon or germanium.
The History of the Integrated Circuit
Jack Kilby and Robert Noyce
It seems that the integrated circuit was destined to be invented. Two separate inventors, unaware of each other's activities, invented almost identical integrated circuits, or ICs, at nearly the same time.
Jack Kilby, an engineer with a background in ceramic-based silk screen circuit boards and transistor-based hearing aids, started working for Texas Instruments in 1958. A year earlier, research engineer Robert Noyce had co-founded the Fairchild Semiconductor Corporation. From 1958 to 1959, both electrical engineers were working on an answer to the same dilemma: how to make more of less, that is, how to pack more electronic components into less space.
Why the Integrated Circuit Was Needed
Designing a complex electronic machine such as a computer always required increasing the number of components in order to make technical advances. The monolithic integrated circuit (formed from a single crystal) placed the previously separate transistors, resistors, capacitors, and all the connecting wiring onto a single crystal, or "chip," of semiconductor material. Kilby used germanium and Noyce used silicon for the semiconductor material.
In 1961 the first commercially available integrated circuits came from the Fairchild Semiconductor Corporation. Computers soon began to be made using chips instead of individual transistors and their accompanying parts. Texas Instruments first used the chips in Air Force computers and the Minuteman missile in 1962, and later used them to produce the first electronic portable calculators. The original IC had only one transistor, three resistors, and one capacitor, and was the size of an adult's pinkie finger. Today an IC smaller than a penny can hold 125 million transistors.
SSI, MSI and LSI
The first integrated circuits contained only a few transistors. Called "Small-Scale Integration" (SSI), these digital circuits contained transistor counts in the tens and provided, for example, a few logic gates, while early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The term "Large-Scale Integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; from it came the terms SSI, MSI, VLSI, and ULSI.
SSI circuits were crucial to early aerospace projects, and vice versa. Both the Minuteman missile and the Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated integrated-circuit technology, while the Minuteman missile forced it into mass production.
These programs purchased almost all of the available integrated circuits from 1960 through 1963, and almost alone provided the demand that funded the production improvements that brought costs down from $1000 per circuit (in 1960 dollars) to merely $25 per circuit (in 1963 dollars). Integrated circuits began to appear in consumer products at the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.
The next step in the development of integrated circuits, taken in the late 1960s, introduced devices that contained hundreds of transistors on each chip; this level of complexity was called "Medium-Scale Integration" (MSI).
MSI devices were attractive economically because, while they cost little more to produce than SSI devices, they allowed more complex systems to be built using smaller circuit boards and less assembly work.