Chapter 1. Introduction to Computer Organization and Computer Evolution
I. Computer Organization and Computer Architecture
In describing computers, a distinction is often made between computer architecture and computer organization. Although it is difficult to give precise definitions for these terms, a consensus exists about the general areas covered by each.
Computer Architecture refers to those attributes of a system visible to a programmer or, put another way, those attributes that have a direct impact on the logical execution of a program. Examples of architectural attributes include the instruction set, the number of bits used to represent various data types (e.g., numbers, characters), I/O mechanisms, and techniques for addressing memory.
Computer Organization refers to the operational units and their interconnections that realize the architectural specifications. Examples of organizational attributes include those hardware details transparent to the programmer, such as control signals; interfaces between the computer and peripherals; and the memory technology used.
As an example, it is an architectural design issue whether a computer will have a multiply instruction. It is an organizational issue whether that instruction will be implemented by a special multiply unit or by a mechanism that makes repeated use of the add unit of the system. The organizational decision may be based on the anticipated frequency of use of the multiply instruction, the relative speed of the two approaches, and the cost and physical size of a special multiply unit.
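To make the distinction concrete, here is a minimal sketch in Python (the function names are invented for illustration): both routines present the same programmer-visible multiply operation, but one models a dedicated multiply unit while the other reuses the add operation repeatedly.

    # Sketch only: one architectural interface, two possible organizations.
    # The programmer-visible result is identical; only the internal
    # realization differs.

    def multiply_dedicated(a: int, b: int) -> int:
        # Organization 1: a special multiply unit (modelled here by the
        # host machine's own multiplier).
        return a * b

    def multiply_by_repeated_add(a: int, b: int) -> int:
        # Organization 2: repeated use of the add unit.
        result = 0
        for _ in range(abs(b)):
            result += a
        return result if b >= 0 else -result

    assert multiply_dedicated(7, 6) == multiply_by_repeated_add(7, 6) == 42

A program that only uses the multiply operation cannot tell the two apart, which is exactly why the choice is organizational rather than architectural.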
Historically, and still today, the distinction between architecture and organization has been an important one. Many computer manufacturers offer a family of computer models, all with the same architecture but with differences in organization. Consequently, the different models in the family have different price and performance characteristics. Furthermore, a particular architecture may span many years and encompass a number of different computer models, its organization changing with changing technology. A prominent example of both these phenomena is the IBM System/370 architecture. This architecture was first introduced in 1970 and included a number of models. The customer with modest requirements could buy a cheaper, slower model and, if demand increased, later upgrade to a more expensive, faster model without having to abandon software that had already been developed. These newer models retained the same architecture so that the customer’s software investment was protected. Remarkably, the System/370 architecture, with a few enhancements, has survived to this day as the architecture of IBM’s mainframe product line.

II. Structure and Function
A computer is a complex system; contemporary computers contain millions of elementary electronic components. The key is to recognize the hierarchical nature of most complex systems, including the computer. A hierarchical system is a set of interrelated subsystems, each of the latter, in turn, hierarchical in structure until we reach some lowest level of elementary subsystem.
The hierarchical nature of complex systems is essential to both their design and their description. The designer need only deal with a particular level of the system at a time. At each level, the system consists of a set of components and their interrelationships. The behaviour at each level depends only on a simplified, abstracted characterization of the system at the next lower level. At each level, the designer is concerned with structure and function:
• Structure: The way in which the components are interrelated
• Function: The operation of each individual component as part of the structure
The computer system will be described from the top down. We begin with the major components of a computer, describing their structure and function, and proceed to successively lower layers of the hierarchy.
Function
Both the structure and functioning of a computer are, in essence, simple. Figure 1.1 depicts the basic functions that a computer can perform. In general terms, there are only four:
• Data processing: The computer, of course, must be able to process data. The data may take a wide variety of forms, and the range of processing requirements is broad. However, we shall see that there are only a few fundamental methods or types of data processing.
• Data storage: It is also essential that a computer store data. Even if the computer is processing on the fly (i.e., data come in and get processed, and the results go out immediately), the computer must temporarily store at least those pieces of data that are being worked on at any given moment. Thus, there is at least a short-term data storage function. Equally important, the computer performs a long-term data storage function. Files of data are stored on the computer for subsequent retrieval and update.
• Data movement: The computer must be able to move data between itself and the outside world. The computer’s operating environment consists of devices that serve as either sources or destinations of data. When data are received from or delivered to a device that is directly connected to the computer, the process is known as input-output (I/O), and the device is referred to as a peripheral. When data are moved over longer distances, to or from a remote device, the process is known as data communications.
• Control: Finally, there must be control of these three functions. Ultimately, this control is exercised by the individual(s) who provide the computer with instructions. Within the computer, a control unit manages the computer’s resources and orchestrates the performance of its functional parts in response to those instructions.

FIGURE 1.1 A FUNCTIONAL VIEW OF THE COMPUTER
At this general level of discussion, the number of possible operations that can be performed is few. Figure 1.2 depicts the four possible types of operations. The computer can function as a data movement device (Figure 1.2a), simply transferring data from one peripheral or communications line to another. It can also function as a data storage device (Figure 1.2b), with data transferred from the external environment to computer storage (read) and vice versa (write). The final two diagrams show operations involving data processing, on data either in storage (Figure 1.2c) or en route between storage and the external environment (Figure 1.2d).
FIGURE 1.2 POSSIBLE COMPUTER OPERATIONS
Structure
Figure 1.3 is the simplest possible depiction of a computer. The computer interacts in some fashion with its external environment. In general, all of its linkages to the external environment can be classified as peripheral devices or communication lines. There are four main structural components (Figure 1.4):
• Central Processing Unit (CPU): Controls the operation of the computer and performs its data processing functions; often simply referred to as the processor
• Main memory: Stores data
• I/O: Moves data between the computer and its external environment
• System interconnection: Some mechanism that provides for communication among CPU, main memory, and I/O

FIGURE 1.3 THE COMPUTER
FIGURE 1.4 THE COMPUTER: TOP-LEVEL STRUCTURE
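As an illustrative sketch only (the class and method names below are assumptions, not part of any real machine), the four components and their single point of communication might be modelled like this in Python:

    # Rough structural sketch of the top-level organization in Figure 1.4.
    class MainMemory:
        def __init__(self, size):
            self.cells = [0] * size
        def read(self, addr):
            return self.cells[addr]
        def write(self, addr, value):
            self.cells[addr] = value

    class IOModule:
        def write_to_device(self, value):
            print("output device:", value)      # stand-in for a real peripheral

    class SystemInterconnection:
        # Provides for communication among CPU, main memory, and I/O.
        def __init__(self, memory, io):
            self.memory, self.io = memory, io

    class CPU:
        def __init__(self, interconnect):
            self.bus = interconnect              # all traffic uses the interconnection
        def run(self):
            value = self.bus.memory.read(0)      # data movement: memory -> CPU
            self.bus.io.write_to_device(value)   # data movement: CPU -> I/O

    bus = SystemInterconnection(MainMemory(16), IOModule())
    CPU(bus).run()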
There may be one or more of each of the aforementioned components. Traditionally, there has been just a single CPU. In recent years, there has been increasing use of multiple processors in a single computer. The most interesting and in some ways the most complex component is the CPU; its structure is depicted in Figure 1.5. Its major structural components are:
• Control unit: Controls the operation of the CPU and hence the computer
• Arithmetic and logic unit (ALU): Performs the computer’s data processing functions
• Registers: Provides storage internal to the CPU
• CPU interconnection: Some mechanism that provides for communication among the control unit, ALU, and registers
FIGURE 1.5 THE CENTRAL PROCESSING UNIT (CPU)
Finally, there are several approaches to the implementation of the control unit; one common approach is a microprogrammed implementation. In essence, a microprogrammed control unit operates by executing microinstructions that define the functionality of the control unit. The structure of the control unit can be depicted as in Figure 1.6.
FIGURE 1.6 THE CONTROL UNIT
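The idea of a microprogrammed control unit can be sketched roughly as follows (a hypothetical Python illustration; the instruction names and control signals are invented, not an actual control-unit design): each machine instruction expands into a short sequence of microinstructions, and each microinstruction is simply the set of control signals to assert in that step.

    # Hypothetical microprogram: each machine instruction maps to a list of
    # microinstructions; a microinstruction is just a set of control signals.
    MICROPROGRAM = {
        "ADD": [
            {"READ_REGISTERS", "ALU_ADD"},   # present operands to the ALU and add
            {"WRITE_REGISTER"},              # store the ALU result
        ],
        "LOAD": [
            {"MEMORY_READ"},                 # fetch the operand from memory
            {"WRITE_REGISTER"},              # place it in a register
        ],
    }

    def execute(instruction):
        # The control unit "executes" an instruction by stepping through its
        # microinstructions and asserting the listed control signals.
        for step, signals in enumerate(MICROPROGRAM[instruction]):
            print("step", step, "assert:", sorted(signals))

    execute("ADD")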
III. Importance of Computer Organization and Architecture
The computer lies at the heart of computing. Without it, most of the computing disciplines today would be a branch of theoretical mathematics. To be a professional in any field of computing today, one should not regard the computer as just a black box that executes programs by magic. All students of computing should acquire some understanding and appreciation of a computer system’s functional components, their characteristics, their performance, and their interactions. There are practical implications as well. Students need to understand computer architecture in order to structure a program so that it runs more efficiently on a real machine. In selecting a system to use, they should be able to understand the trade-offs among various components, such as CPU clock speed versus memory size. [Reported by the Joint Task Force on Computing Curricula of the IEEE (Institute of Electrical and Electronics Engineers) Computer Society and ACM (Association for Computing Machinery)].
IV. Computer Evolution
A brief history of computers is interesting in its own right and also serves to provide an overview of computer structure and function. It also supplies a useful context for considering the need for balanced utilization of computer resources.
The First Generation: Vacuum Tubes
ENIAC: The ENIAC (Electronic Numerical Integrator And Computer), designed by and constructed under the supervision of John Mauchly and John Presper Eckert at the University of Pennsylvania, was the world’s first general-purpose electronic digital computer.
The project was a response to U.S. wartime needs during World War II. The Army’s Ballistics Research Laboratory (BRL), an agency responsible for developing range and trajectory tables for new weapons, was having difficulty supplying these tables accurately and within a reasonable time frame.
Mauchly, a professor of electrical engineering at the University of Pennsylvania, and Eckert, one of his graduate students, proposed to build a general-purpose computer using vacuum tubes for the BRL’s application. In 1943, the Army accepted this proposal, and work began on the ENIAC. The resulting machine was enormous, weighing 30 tons, occupying 1,500 square feet of floor space, and containing more than 18,000 vacuum tubes. When operating, it consumed 140 kilowatts of power. It was also substantially faster than any electromechanical computer, being capable of 5000 additions per second.
The ENIAC was a decimal rather than a binary machine. That is, numbers were represented in decimal form and arithmetic was performed in the decimal system. Its memory consisted of 20 “accumulators,” each capable of holding a 10-digit decimal number. A ring of 10 vacuum tubes represented each digit. At any time, only one vacuum tube was in the ON state, representing one of the 10 digits. The major drawback of the ENIAC was that it had to be programmed manually by setting switches and plugging and unplugging cables.
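The ring-of-ten representation can be pictured with a small sketch (hypothetical, purely for illustration): exactly one of the ten "tubes" is ON at a time, and the position of the ON tube is the digit being stored.

    # Illustrative model of one ENIAC digit: ten "tubes", exactly one ON.
    def digit_ring(value):
        ring = [False] * 10
        ring[value] = True        # the ON position encodes the digit
        return ring

    def ring_value(ring):
        return ring.index(True)   # read the digit back from the ON position

    five = digit_ring(5)
    print(five)                   # only position 5 is ON
    print(ring_value(five))       # -> 5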
The ENIAC was completed in 1946, too late to be used in the war effort. Instead, its first task was to perform a series of complex calculations that were used to help determine the feasibility of the hydrogen bomb. The use of the ENIAC for a purpose other than that for which it was built demonstrated its general-purpose nature. The ENIAC continued to operate under BRL management until 1955, when it was disassembled.
The von Neumann Machine: The task of entering and altering programs for the ENIAC was extremely tedious. The programming process could be facilitated if the program could be represented in a form suitable for storing in memory alongside the data. Then, a computer could get its instructions by reading them from memory, and a program could be set or altered by setting the values of a portion of memory.
This idea, known as the stored-program concept, is usually attributed to the ENIAC designers, most notably the mathematician John von Neumann, who was a consultant on the ENIAC project. Alan Turing developed the idea at about the same time. The first publication of the idea was in a 1945 proposal by von Neumann for a new computer, the EDVAC (Electronic Discrete Variable Automatic Computer).
In 1946, von Neumann and his colleagues began the design of a new stored-program computer, referred to as the IAS computer, at the Institute for Advanced Study in Princeton. The IAS computer, although not completed until 1952, is the prototype of all subsequent general-purpose computers. Figure 1.7 shows the general structure of the IAS computer. It consists of:
• A main memory, which stores both data and instructions
• An arithmetic and logic unit (ALU) capable of operating on binary data
• A control unit, which interprets the instructions in memory and causes them to be executed
• Input and output (I/O) equipment operated by the control unit
FIGURE 1.7 STRUCTURE OF THE IAS COMPUTER
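The stored-program idea can be illustrated with a toy fetch-decode-execute loop (a sketch with an invented opcode set, not the actual IAS instruction format): instructions and data occupy the same memory, and the control unit simply reads words from memory and interprets them.

    # Toy stored-program machine (illustrative only, not the real IAS design):
    # instructions and data share one memory.
    memory = [
        ("LOAD",  4),   # 0: accumulator <- memory[4]
        ("ADD",   5),   # 1: accumulator <- accumulator + memory[5]
        ("STORE", 6),   # 2: memory[6] <- accumulator
        ("HALT",  0),   # 3: stop
        3, 4, 0,        # 4..6: data words
    ]

    pc, accumulator = 0, 0
    while True:
        opcode, address = memory[pc]           # fetch the next instruction
        pc += 1
        if opcode == "LOAD":                   # decode and execute
            accumulator = memory[address]
        elif opcode == "ADD":
            accumulator += memory[address]
        elif opcode == "STORE":
            memory[address] = accumulator
        elif opcode == "HALT":
            break

    print(memory[6])                           # -> 7

Altering the program is now just a matter of writing different values into memory, which is exactly the convenience the stored-program concept provides.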
Commercial Computers
The 1950s saw the birth of the computer industry with two companies, Sperry and IBM, dominating the marketplace.
UNIVAC I: In 1947, Eckert and Mauchly formed the Eckert-Mauchly Computer Corporation to manufacture computers commercially. Their first successful machine was the UNIVAC I (Universal Automatic Computer), which was commissioned by the U.S. Bureau of the Census for the 1950 census calculations. The Eckert-Mauchly Computer Corporation became part of the UNIVAC division of the Sperry-Rand Corporation, which went on to build a series of successor machines.
The UNIVAC I was the first successful commercial computer. It was intended, as the name implies, for both scientific and commercial applications. The first paper describing the system listed matrix algebraic computations, statistical problems, premium billings for a life insurance company, and logistical problems as a sample of the tasks it could perform.
UNIVAC II: The UNIVAC II, which had greater memory capacity and higher performance than the UNIVAC I, was delivered in the late 1950s and illustrates several trends that have remained characteristic of the computer industry. First, advances in technology allow companies to continue to build larger, more powerful computers. Second, each company tries to make its new machines upward compatible with the older machines. This means that the programs written for the older machines can be executed on the new machine. This strategy is adopted in the hopes of retaining the customer base; that is, when a customer decides to buy a newer machine, he or she is likely to get it from the same company to avoid losing the investment in programs.
The UNIVAC division also began development of the 1100 series of computers, which was to be its major source of revenue. This series illustrates a distinction that existed at one time between machines oriented toward scientific applications and those oriented toward business applications. IBM (International Business Machines), then the major manufacturer of punched-card processing equipment, delivered its first electronic stored-program computer, the 701, in 1953, intended primarily for scientific work. In 1955, IBM introduced the companion 702 product, which had a number of hardware features that suited it to business applications. These were the first of a long series of 700/7000 computers that established IBM as the overwhelmingly dominant computer manufacturer.
The Second Generation: Transistors
The first major change in the electronic computer came with the replacement of the vacuum tube by the transistor. The transistor is smaller, cheaper, and dissipates less heat than a vacuum tube but can be used in the same way as a vacuum tube to construct computers. Unlike the vacuum tube, which requires wires, metal plates, a glass capsule, and a vacuum, the transistor is a solid-state device, made from silicon.
The transistor was invented at Bell Labs in 1947 and by the 1950s had launched an electronic revolution. National Cash Register (NCR) and, more successfully, the Radio Corporation of America (RCA) were the front-runners with some small transistor machines. IBM followed shortly with the 7000 series.
The second generation is noteworthy also for the appearance of the Digital Equipment Corporation (DEC). DEC was founded in 1957 and, in that year, delivered its first computer, the PDP-1 (Programmed Data Processor). This computer and this company began the minicomputer phenomenon that would become so prominent in the third generation.
The IBM 7094: From the introduction of the 700 series in 1952 to the introduction of the last member of the 7000 series in 1964, this IBM product line underwent an evolution that is typical of computer products. Successive members of the product line show increased performance, increased capacity, and/or lower cost. Table 1.1 illustrates this trend.

The Third Generation: Integrated Circuit
A single, self-contained transistor is called a discrete component. Throughout the 1950s and early 1960s, electronic equipment was composed largely of discrete components: transistors, resistors, capacitors, and so on. Discrete components were manufactured separately, packaged in their own containers, and soldered or wired together onto masonite-like circuit boards, which were then installed in computers, oscilloscopes, and other electronic equipment. Early second-generation computers contained about 10,000 transistors. This figure grew to the hundreds of thousands, making the manufacture of newer, more powerful machines increasingly difficult.
In 1958 came the achievement that revolutionized electronics and started the era of microelectronics: the invention of the integrated circuit.
Microelectronics: Microelectronics means, literally, “small electronics.” Since the beginnings of digital electronics and the computer industry, there has been a persistent and consistent trend toward the reduction in size of digital electronic circuits. The basic elements of a digital computer, as we know, must perform storage, movement, processing, and control functions. Only two fundamental types of components are required: gates and memory cells. A gate is a device that implements a simple Boolean or logical function. Such devices are called gates because they control data flow in much the same way that canal gates do. The memory cell is a device that can store one bit of data; that is, the device can be in one of two stable states at any time. By interconnecting large numbers of these fundamental devices, we can construct a computer. We can relate this to our four basic functions as follows:
• Data storage: Provided by memory cells.
• Data processing: Provided by gates.
• Data movement: The paths between components are used to move data from memory to memory and from memory through gates to memory.
• Control: The paths between components can carry control signals. When the control signal is ON, the gate performs its function on the data inputs and produces a data output. Similarly, the memory cell will store the bit that is on its input lead when the WRITE control signal is ON and will place the bit that is in the cell on its output lead when the READ control signal is ON.
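A minimal sketch of these two building blocks (illustrative Python, not a circuit description): a gate computes a simple Boolean function of its inputs, and a memory cell holds one bit that is updated only while its WRITE control signal is ON and driven onto its output only while READ is ON.

    # Illustrative models of the two fundamental component types.
    def nand_gate(a, b):
        # A gate: implements a simple Boolean function of its inputs.
        return not (a and b)

    class MemoryCell:
        # Holds one bit; control signals gate when it is written or read.
        def __init__(self):
            self.bit = False
        def write(self, data_in, write_signal):
            if write_signal:                   # store the input only when WRITE is ON
                self.bit = data_in
        def read(self, read_signal):
            return self.bit if read_signal else None   # output only when READ is ON

    cell = MemoryCell()
    cell.write(nand_gate(True, False), write_signal=True)   # stores True
    print(cell.read(read_signal=True))                      # -> True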
Thus, a computer consists of gates, memory cells, and interconnections among these elements. The integrated circuit exploits the fact that such components as transistors, resistors, and conductors can be fabricated from a semiconductor such as silicon. It is merely an extension of the solid-state art to fabricate an entire circuit in a tiny piece of silicon rather than assemble discrete components made from separate pieces of silicon into the same circuit. Many transistors can be produced at the same time on a single wafer of silicon. Equally important, these transistors can be connected with a process of metallization to form circuits.
Figure 1.8 depicts the key concepts in an integrated circuit. A thin wafer of silicon is divided into a matrix of small areas, each a few millimetres square. The identical circuit pattern is fabricated in each area, and the wafer is broken up into chips. Each chip consists of many gates and/or memory cells plus a number of input and output attachment points. This chip is then packaged in housing that protects it and provides pins for attachment to devices beyond the chip. A number of these packages can then be interconnected on a printed circuit board to produce larger and more complex circuits.

As time went on, it became possible to pack more and more components on the same chip. This growth in density is illustrated in Figure 1.9; it is one of the most remarkable technological trends ever recorded. This figure reflects the famous Moore’s law, which was propounded by Gordon Moore, cofounder of Intel, in 1965. Moore observed that the number of transistors that could be put on a single chip was doubling every year and correctly predicted that this pace would continue into the near future.
FIGURE 1.9 GROWTH IN CPU TRANSISTOR COUNT
The consequences of Moore’s law are profound:
1. The cost of a chip has remained virtually unchanged during this period of rapid growth in density. This means that the cost of computer logic and memory circuitry has fallen at a dramatic rate.
2. Because logic and memory elements are placed closer together on more densely packed chips, the electrical path length is shortened, increasing operating speed.
3. The computer becomes smaller, making it more convenient to place in a variety of environments.
4. There is a reduction in power and cooling requirements.
5. The interconnections on the integrated circuit are much more reliable than solder connections. With more circuitry on each chip, there are fewer interchip connections.
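The growth Moore described is easy to make concrete with a back-of-the-envelope calculation (the 1965 starting count below is an assumed figure for illustration, not data from the figure):

    # Back-of-the-envelope Moore's-law projection; the starting count is an
    # assumption chosen only to illustrate annual doubling.
    transistors = 64                 # assumed transistors per chip in 1965
    for year in range(1965, 1976):
        print(year, transistors)
        transistors *= 2             # doubling every year, as Moore observed

Ten doublings multiply the count by roughly a thousand, which is why the consequences listed above accumulate so quickly.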
IBM System/360: By 1964, IBM had a firm grip on the computer market with its 7000 series of machines. In that year, IBM announced the System/360, a new family of computer products. Although the announcement itself was no surprise, it contained some unpleasant news for current IBM customers: the 360 product line was incompatible with older IBM machines. Thus, the transition to the 360 would be difficult for the current customer base. This was a bold step by IBM, but one IBM felt was necessary to break out of some of the constraints of the 7000 architecture and to produce a system capable of evolving with the new integrated circuit technology. The 360 was the success of the decade and cemented IBM as the overwhelmingly dominant computer vendor, with a market share above 70%.
The System/360 was the industry’s first planned family of computers. The family covered a wide range of performance and cost. Table 1.2 indicates some of the key characteristics of the various models in 1965.
TABLE 1.2 KEY CHARACTERISTICS OF THE SYSTEM/360 FAMILY
Characteristic Model 30 Model 40 Model 50 Model 65 Model 75
Maximum memory size (bytes) 64K 256K 256K 512K 512K
Data rate from memory (Mbytes/sec) 0.5 0.8 2.0 8.0 16.0
Processor cycle time (μs) 1.0 0.625 0.5 0.25 0.2
Relative speed 1 3.5 10 21 50
Maximum number of data channels 3 3 4 6 6
Maximum data rate on one channel (Kbytes/s) 250 400 800 1250 1250

The concept of a family of compatible computers was both novel and extremely successful. The characteristics of a family are as follows:
• Similar or identical instruction set: The program that executes on one machine will also execute on any other.
• Similar or identical operating system: The same basic operating system is available for all family members.
• Increasing speed: The rate of instruction execution increases in going from lower to higher family members.
• Increasing number of I/O ports: In going from lower to higher family members.
• Increasing memory size: In going from lower to higher family members.
• Increasing cost: In going from lower to higher family members.
DEC PDP-8: In the same year that IBM shipped its first System/360, another momentous first shipment occurred: the PDP-8 from DEC. At a time when the average computer required an air-conditioned room, the PDP-8 (dubbed a minicomputer by the industry) was small enough that it could be placed on top of a lab bench or be built into other equipment. It could not do everything the mainframe could, but at $16,000, it was cheap enough for each lab technician to have one.
The low cost and small size of the PDP-8 enabled other manufacturers to purchase a PDP-8 and integrate it into a total system for resale. These other manufacturers came to be known as original equipment manufacturers (OEMs), and the OEM market became and remains a major segment of the computer marketplace. As DEC’s official history puts it, the PDP-8 “established the concept of minicomputers, leading the way to a multibillion dollar industry.”
Later Generations
Beyond the third generation there is less general agreement on defining generations of computers. Table 1.3 suggests that there have been a number of later generations, based on advances in integrated circuit technology.
TABLE 1.3 COMPUTER GENERATIONS
Generation Approximate Dates Technology Typical Speed (operations per second)
1 1946-1957 Vacuum tube 40,000
2 1958-1964 Transistor 200,000
3 1965-1971 Small and medium scale integration 1,000,000
4 1972-1977 Large scale integration 10,000,000
5 1978-1991 Very large scale integration 100,000,000
6 1991- Ultra large scale integration 1,000,000,000

With the rapid pace of technology, the high rate of introduction of new products, and the importance of software and communications as well as hardware, the classification by generation becomes less clear and less meaningful. In this section, we mention two of the most important of these later developments: semiconductor memory and the microprocessor.
Semiconductor Memory: The first application of integrated circuit technology to computers was construction of the processor (the control unit and the arithmetic and logic unit) out of integrated circuit chips. But it was also found that this same technology could be used to construct memories.
In the 1950s and 1960s, most computer memory was constructed from tiny rings of ferromagnetic material, each about a sixteenth of an inch in diameter. These rings were strung up on grids of fine wires suspended on small screens inside the computer. Magnetized one way, a ring (called a core) represented a one; magnetized the other way, it stood for a zero. Core memory was expensive and bulky, and it used destructive readout: reading a core erased its contents, which then had to be rewritten.
Then, in 1970, Fairchild produced the first relatively capacious semiconductor memory. This chip, about the size of a single core, could hold 256 bits of memory. Its readout was nondestructive, and it was much faster than core, taking only 70 billionths of a second to read a bit. However, the cost per bit was higher than that of core.
In 1974, a seminal event occurred: The price per bit of semiconductor memory dropped below the price per bit of core memory. Following this, there has been a continuing and rapid decline in memory cost accompanied by a corresponding increase in physical memory density. Since 1970, semiconductor memory has been through 11 generations: 1K, 4K, 16K, 64K, 256K, 1M, 4M, 16M, 64M, 256M, and, as of this writing, 1G bits on a single chip. Each generation has provided four times the storage density of the previous generation, accompanied by declining cost per bit and declining access time.
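The stated progression is easy to check: starting at 1K bits and quadrupling the density each generation reaches 1G bits at the eleventh generation, as the short calculation below illustrates.

    # Eleven generations, each with four times the density of the previous:
    # 1K, 4K, 16K, ..., 1G bits on a single chip.
    capacity_bits = 1024                 # 1 Kbit, the first generation
    for generation in range(1, 12):
        print(generation, capacity_bits)
        capacity_bits *= 4               # each generation is 4x denser

    # The 11th value printed is 1024 * 4**10 = 2**30 bits, i.e., 1 Gbit.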
Microprocessors: Just as the density of elements on memory chips has continued to rise, so has the density of elements on processor chips. As time went on, more and more elements were placed on each chip, so that fewer and fewer chips were needed to construct a single computer processor.
A breakthrough was achieved in 1971, when Intel developed its 4004. The 4004 was the first chip to contain all of the components of a CPU on a single chip: the microprocessor was born.
The 4004 could add two 4-bit numbers and could multiply only by repeated addition. By today’s standards, the 4004 is hopelessly primitive, but it marked the beginning of a continuing evolution of microprocessor capability and power.
