Since the inception of computing, the binary digit (bit) has been used to encode information at the lowest level. Why? … How efficient is this process, and are there any ways of optimizing it?
The bit is the most primitive data type in a computer, typically represented as 1 or 0. In this discussion, we will explore the reasons for choosing the bit as the lowest-level encoding. We will also discuss how binary data can be efficiently stored, transferred, and executed. Finally, we will examine ways to optimize binary instructions.
Computers are essentially electrical systems, and at the lowest level their components reliably hold only two distinct states, "on" and "off". The binary system was adopted to represent these states. Building on this groundwork, logic gates (built from transistors) switch and combine these signals. By combining logic gates, we are able to build complex functional units; the computer processor is one such unit, made up of many thousands of transistors (billions, in modern chips).
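The idea that complex units arise from combining simple gates can be sketched in software. The following half adder, an illustrative example of ours rather than anything from the source, builds one-bit addition out of just an XOR gate and an AND gate:

```python
# A half adder built from two basic logic gates:
# XOR produces the sum bit, AND produces the carry bit.
def half_adder(a: int, b: int) -> tuple[int, int]:
    sum_bit = a ^ b  # XOR gate
    carry = a & b    # AND gate
    return sum_bit, carry

# Truth table: combining just these two gates yields one-bit addition.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Chaining such adders (with an extra OR gate to propagate carries) gives a full multi-bit adder, which is essentially how a processor's arithmetic unit is constructed.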
Since processors are made of logic gates that operate on binary signals, it is only natural to feed them inputs in binary form. In this way, no "translation" is needed, as input and hardware speak the same "language". Hence processors perform operations most efficiently on binary data.
In terms of storage, binary data can be compressed using algorithms such as Huffman coding (Brookshear, 2011). The bit, in this case, serves as an abstraction layer: we can compress any file type without needing to know what it contains. This effectively reduces storage space.
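Huffman coding works by assigning shorter bit patterns to more frequent symbols. A minimal sketch in Python (our own illustration with invented sample data, not code from Brookshear's text) builds the code table by repeatedly merging the two least frequent subtrees:

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict[int, str]:
    """Build a Huffman code table: frequent bytes get shorter bit strings."""
    freq = Counter(data)
    # Heap entries: (frequency, tiebreaker, {byte: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct byte
        return {sym: "0" for sym in heap[0][2]}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        # Merge the two rarest subtrees, prefixing their codes with 0/1.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        counter += 1
        heapq.heappush(heap, (f1 + f2, counter, merged))
    return heap[0][2]

data = b"aaaabbc"
codes = huffman_codes(data)
encoded = "".join(codes[b] for b in data)
print(codes, len(encoded), "bits vs", 8 * len(data), "bits raw")
```

Note that the algorithm never inspects what the bytes mean: the same function compresses text, images, or executables, which is exactly the file-type-agnostic property described above.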
From the transport perspective, binary data is very portable. As long as a medium supports two distinct states, it can be used to store or transfer binary data; examples include electric voltage levels and magnetic polarization. Due to this simplicity, many materials can be used to express the states. The TCP/IP protocol suite, developed in the 1970s and standardized in the early 1980s, was built around binary data transfer: it packs binary data into packets and sends them across interconnected networks.
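Packing data into a binary packet can be sketched with Python's `struct` module. The header layout below is a simplified toy of our own, not the real TCP header, but it uses big-endian ("network") byte order the way actual protocols do:

```python
import struct

# Toy packet layout: 16-bit source port, 16-bit destination port,
# 32-bit sequence number, then the raw payload bytes.
# "!" selects network (big-endian) byte order. NOT the full TCP format.
def pack_packet(src: int, dst: int, seq: int, payload: bytes) -> bytes:
    return struct.pack("!HHI", src, dst, seq) + payload

def unpack_packet(packet: bytes) -> tuple[int, int, int, bytes]:
    src, dst, seq = struct.unpack("!HHI", packet[:8])
    return src, dst, seq, packet[8:]

pkt = pack_packet(5000, 80, 1, b"hello")
print(unpack_packet(pkt))  # -> (5000, 80, 1, b'hello')
```

Because the packet is just a sequence of bits, the same bytes can travel as voltage levels on a wire, light pulses in a fiber, or magnetized regions on a disk, which is the portability argument made above.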