How Data and Programs Are Represented in the Computer

Published: April 27, 2012

by:

Rob Shepherd

CS300

Professor: Fred Kellenberger

Contents:

1. Introduction

2. The Parity Bit

3. Machine Language

4. How Computer Capacity Is Expressed

5. The Processor, Main Memory, and Registers
   a. The Processor
   b. Specialized Processor Chips
   c. CISC, RISC, and MPP
   d. Main Memory

6. Registers

7. The Machine Cycle

8. References

Introduction:

This paper takes a look at what goes on inside our computers, explaining what the components are and how they function. For most people, getting inside a computer is something they would not even think about, and that is probably for the best. However, for those of you with a thirst for knowledge and a desire to see how things work, this is what you are looking for.

Before we study the inner workings of the processor, we need to expand on an earlier discussion of data representation in the computer—how the processor “understands” data. We started with a simple fact: electricity can be either on or off.

Other kinds of technology also use this two-state on/off arrangement. An electrical circuit may be open or closed. The magnetic pulses on a disk or tape may be present or absent. A voltage may be high or low. A punched card or tape may have a hole or not have a hole. This two-state situation allows computers to use the binary system to represent data and programs.

The decimal system that we are accustomed to has 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9). By contrast, the binary system has only two digits: 0 and 1. (Bi- means “two.”) Thus, in the computer the 0 can be represented by the electrical current being off (or at low voltage) and the 1 by the current being on (or at high voltage). All data and programs that go into the computer are represented in terms of these numbers. For example, the letter H is a translation of the electronic signal 01001000, or off-on-off-off-on-off-off-off. When you press the key for H on the computer keyboard, the character is automatically converted into the series of electronic impulses that the computer recognizes.
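The mapping above can be checked in a few lines. This is a minimal Python sketch (not part of the original paper) that converts the letter H to its numeric character code and prints the same 8-bit pattern, 01001000:

```python
# Show how the letter H corresponds to the bit pattern 01001000.
ch = "H"
code = ord(ch)               # numeric character code for H: 72
bits = format(code, "08b")   # the code as an 8-bit binary string
print(code, bits)            # prints: 72 01001000
```

Reading the bits left to right as off-on-off-off-on-off-off-off matches the series of electrical impulses described above.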

All the amazing things that computers do are based on binary numbers made up of 0s and 1s. Fortunately, we don’t have to enter data into the computer using groupings of 0s and 1s. Rather, data is encoded, or arranged, by means of binary, or digital, coding schemes to represent letters, numbers, and special characters.

There are many coding schemes. Two common ones are EBCDIC and ASCII. EBCDIC uses 8 bits per character, and ASCII uses 7 bits (or 8 in its extended form); a full 8-bit byte provides up to 256 combinations with which to form letters, numbers, and special characters, such as math symbols and Greek letters. One newer coding scheme uses 16 bits, enabling it to represent 65,536 unique characters.
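The character counts quoted above all follow from one rule: an n-bit code can distinguish 2 to the power n characters. A quick Python check (an illustration, not from the paper):

```python
# An n-bit code distinguishes 2**n characters:
# 7 bits -> 128, 8 bits -> 256, 16 bits -> 65,536.
for n in (7, 8, 16):
    print(n, "bits ->", 2 ** n, "characters")
```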

EBCDIC: Pronounced “eb-see-dick,” EBCDIC, which stands for Extended Binary Coded Decimal Interchange Code, is commonly used in IBM mainframes. EBCDIC is an 8-bit coding scheme, meaning that it can represent 256 characters.

ASCII: Pronounced “as-key,” ASCII, which stands for American Standard Code for Information Interchange, is the most widely used binary code with non-IBM mainframes and microcomputers. Whereas standard ASCII originally used 7 bits for each character, limiting its character set to 128, the more common extended ASCII uses 8 bits.

Unicode: Although ASCII can handle English and European languages well, it cannot handle all the characters of some other languages, such as Chinese and Japanese. Unicode, which was developed to handle such languages, uses 2 bytes (16 bits) for each character instead of 1 byte (8 bits), enabling it to handle 65,536 character combinations rather than just 256. Although each Unicode character takes up twice as much memory space and disk space as each ASCII character, conversion to the Unicode standard...
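The storage cost mentioned above can be demonstrated directly. This Python sketch (an illustration added here, not from the paper) encodes the same word as ASCII and as UTF-16, the 2-byte-per-character Unicode encoding; the 2 extra bytes in the UTF-16 result are a byte-order mark that Python prepends:

```python
# Compare storage sizes for the same text in ASCII and Unicode (UTF-16).
text = "Hello"
ascii_bytes = text.encode("ascii")    # 1 byte per character
utf16_bytes = text.encode("utf-16")   # 2 bytes per character + 2-byte byte-order mark
print(len(ascii_bytes))   # prints: 5
print(len(utf16_bytes))   # prints: 12
```

Dropping the byte-order mark (the "utf-16-le" encoding) gives exactly twice the ASCII size, 10 bytes for 5 characters.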