CSIS 550 History of Computing Professor Tim Bergin Technology Research Paper: Microprocessors
Beatrice A. Muganda AU ID: 0719604 May 3, 2001
EVOLUTION OF THE MICROPROCESSOR
Webster's Collegiate Dictionary defines a microprocessor as a computer processor contained on an integrated-circuit chip. In the mid-seventies, a microprocessor was defined as a central processing unit (CPU) realized on an LSI (large-scale integration) chip, operating at a clock frequency of 1 to 5 MHz and constituting an 8-bit system (Heffer, 1986). It was a single component capable of performing a wide variety of functions. Because of their relatively low cost and small size, microprocessors permitted the use of digital computers in many areas where the preceding mainframes, and even minicomputers, would not have been practical or affordable (Computer, 1996). Many non-technical people associate microprocessors only with PCs, yet thousands of appliances have a microprocessor embedded in them: the telephone, dishwasher, microwave oven, and clock radio, for example. In these items, the microprocessor acts primarily as a controller and may not be apparent to the user.
The Breakthrough in Microprocessors
The switching units in the computers of the early 1940s were mechanical relays, devices that opened and closed as they performed calculations. Mechanical relays had also been used in Zuse's machines of the 1930s.
In the 1940s, vacuum tubes took over. The Atanasoff-Berry Computer (ABC) used vacuum tubes as its switching units rather than relays. The switch from mechanical relays to vacuum tubes was an important technological advance, as vacuum tubes could perform calculations considerably faster and more efficiently than relay machines. However, this advance was short-lived: the tubes could not be made much smaller than they already were, and they could not be packed closely together because of the heat they generated (Freiberger and Swaine, 1984). Then came the transistor, which was acknowledged as a revolutionary development. In "Fire in the Valley," the authors describe the transistor as the result of a series of developments in the applications of physics; it changed the computer from a giant electronic brain into a commodity like a TV set. The invention is credited to three scientists: John Bardeen, Walter Brattain, and William Shockley. The technological breakthrough of the transistor made possible the minicomputers of the 1960s and the personal computer revolution of the 1970s. However, researchers did not stop at the transistor. They wanted a device that could perform more complex tasks, one that could integrate a number of transistors into a more complex circuit; hence the term integrated circuit, or IC. Because ICs were physically tiny chips of silicon, they also came to be called simply chips. Initially, the demand for ICs came mainly from the military and aerospace
industries, which were great users of computers and the only industries that could afford them (Freiberger and Swaine, 1984). Later, Marcian "Ted" Hoff, an engineer at Intel, developed a more sophisticated chip, one that could extract data from its memory and interpret the data as an instruction. The term that evolved to describe such a device was "microprocessor," which first came into use at Intel in 1972 (Noyce, 1981). A microprocessor was nothing more than an extension of the arithmetic and logic IC chips, incorporating more functions into one chip (Freiberger and Swaine, 1984). Today, the term still refers to an LSI single-chip processor capable of carrying out many of the basic operations of a digital computer. In fact, the microprocessors of the late eighties and early nineties are full-scale systems with 32-bit data and 32-bit addresses, operating at...