1. Name the three ways in which signed integers can be represented in digital computers and explain the differences.
2. Which one of the three integer representations is used most often by digital computer systems?
3. How are complement systems like the odometer on a bicycle?
4. Do you think that double-dabble is an easier method than the other binary-to-decimal conversion methods explained in this chapter? Why?
5. With reference to the previous question, what are the drawbacks of the other two conversion methods?
6. What is overflow and how can it be detected? How does overflow in unsigned numbers differ from overflow in signed numbers?
7. If a computer is capable only of manipulating and storing integers, what difficulties present themselves? How are these difficulties overcome?
8. What are the three component parts of a floating-point number?
9. What is a biased exponent, and what efficiencies can it provide?
10. What is normalization and why is it necessary?
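As a study aid for questions 4 and 6, here is a minimal Python sketch (not from the chapter; the function names are illustrative) of the double-dabble binary-to-decimal method and of detecting unsigned versus signed (two's complement) overflow in a fixed-width addition:

```python
def double_dabble(bits: str) -> int:
    """Binary-to-decimal by double-dabble: scan left to right,
    doubling the running total and adding ("dabbling in") each bit."""
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)
    return total


def add_with_overflow(a: int, b: int, width: int = 8):
    """Add two width-bit values; report unsigned and signed overflow.

    Unsigned overflow: the true sum does not fit in `width` bits.
    Signed overflow (two's complement): both operands have the same
    sign but the result's sign differs.
    """
    mask = (1 << width) - 1
    raw = (a + b) & mask
    unsigned_overflow = (a + b) > mask

    def to_signed(x):
        return x - (1 << width) if x >> (width - 1) else x

    sa, sb, sr = to_signed(a), to_signed(b), to_signed(raw)
    signed_overflow = (sa >= 0) == (sb >= 0) and (sr >= 0) != (sa >= 0)
    return raw, unsigned_overflow, signed_overflow


print(double_dabble("1101"))                    # 13
print(add_with_overflow(0b01111111, 1))         # 127 + 1 wraps to -128: signed overflow
print(add_with_overflow(200, 100))              # 300 > 255: unsigned overflow
```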
11. Why is there always some degree of error in floating-point arithmetic when performed by a binary digital computer?
12. How many bits long is a double-precision number under the IEEE-754 floating-point standard?
13. What is EBCDIC, and how is it related to BCD?
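Questions 11 and 12 can be checked directly in Python (an illustration, not part of the chapter): 0.1 has no finite base-2 expansion, so the stored value is only the nearest representable double, and packing a double confirms the IEEE-754 double-precision width of 64 bits (1 sign bit, 11 exponent bits, 52 fraction bits):

```python
import struct

# 0.1 cannot be represented exactly in binary floating point.
print(f"{0.1:.20f}")      # shows the rounding error past ~17 digits
print(0.1 + 0.2 == 0.3)   # False, because each term carries rounding error

# An IEEE-754 double occupies exactly 64 bits.
print(len(struct.pack(">d", 0.1)) * 8)  # 64
```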
14. What is ASCII and how did it originate?
15. How many bits does a Unicode character require?
16. Why was Unicode created?
17. Why is non-return-to-zero coding avoided as a method for writing data to a magnetic disk?
18. Why is Manchester coding not a good choice for writing data to a magnetic disk?
19. Explain how run-length-limited encoding works.
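For question 18, a tiny sketch (illustrative, not from the chapter) of Manchester encoding makes the drawback visible: every data bit becomes a two-symbol transition, doubling the number of signal changes that must be written. The mapping below follows the IEEE 802.3 convention (an assumption; the opposite convention also exists):

```python
def manchester_encode(bits: str) -> str:
    """Manchester code, IEEE 802.3 convention: 0 -> "10", 1 -> "01".
    Output is twice the input length, which is why the scheme is
    considered inefficient for dense magnetic-disk recording."""
    return "".join("01" if b == "1" else "10" for b in bits)


print(manchester_encode("1011"))  # 01100101 (8 symbols for 4 bits)
```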
20. How do cyclic redundancy checks work?
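The mechanics behind question 20 can be sketched as modulo-2 long division (a study illustration, not the chapter's own code): append as many zero bits as the degree of the generator polynomial, divide using XOR, and keep the remainder as the checksum.

```python
def crc_remainder(data_bits: str, divisor_bits: str) -> str:
    """CRC via modulo-2 (XOR) long division.

    Appends len(divisor) - 1 zero bits to the message, then repeatedly
    XORs the divisor wherever the leading bit is 1; the leftover bits
    are the CRC remainder."""
    n = len(divisor_bits) - 1
    bits = [int(b) for b in data_bits] + [0] * n
    divisor = [int(b) for b in divisor_bits]
    for i in range(len(data_bits)):
        if bits[i]:
            for j, d in enumerate(divisor):
                bits[i + j] ^= d
    return "".join(str(b) for b in bits[-n:])


# Worked example: message 11010011101100 with generator 1011 leaves
# remainder 100, which is transmitted after the message.
print(crc_remainder("11010011101100", "1011"))  # 100
```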
21. What is systematic error detection?
22. What is a Hamming code?
23. What is meant by Hamming distance and why is it important? What is meant by minimum Hamming distance?
24. How is the number of redundant bits necessary for a code related to the number of data bits?
25. What...