Information Technology – Term 1 (2011) test preparation notes: These notes will cover:
* Buffering and Spooling
* Caching and pipelining
* Virtual Memory
* Packet Switching
* TCP/IP Protocol overview
* Object classes and their associated syntax
A number of techniques have been designed to reduce the number of accesses to slow devices so that the computer can process data faster. Examples of slow devices are the printer (historically one of the slowest devices of a computer: long ago, when a document was printed, people could not use the computer for anything else, as all the processing power was dedicated to fetching and sending the data to be printed) and the hard drive (a slow device in general when compared to memory such as RAM (Random-Access Memory)). Two common methods were employed to alleviate these issues: buffering and SPOOLing.

* Buffering: A buffer is a temporary storage area used to hold data that is waiting to be printed. This allows the CPU to carry on with its tasks while the printer is printing, since the data does not have to be fed to the printer piece by piece. The buffer is usually a storage area inside the printer itself, or an external buffer box in the case of older printers.

* SPOOLing: SPOOL stands for Simultaneous Peripheral Operation On-Line. Data to be output is stored in a spool file on backing storage (the hard drive) until the output device is available. Data is only sent from this file when the CPU is free to deal with it, so the CPU is able to carry on with other necessary tasks. Most computers implement this via some sort of spool manager software, although most users are oblivious to it.

If buffering or SPOOLing did not occur, people would have to wait lengthy periods of time for data to print. Although these two methods are mainly used for printing, further methods are used for data storage in general (such as on the hard disk):

* Caching: Caching is a concept used in most modern computers today. The fundamental idea of caching is to improve the speed of the computer by minimising the number of accesses to slow devices, substituting faster devices in their place (e.g. RAM instead of the hard disk).
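The buffering idea above can be sketched in a few lines of Python. This is a minimal illustration, not a real printing API: a fast "CPU" thread drops pages into a buffer and moves on, while a slow "printer" thread drains the buffer at its own pace. All names (`buffer`, `printer`, the page strings) are made up for the example.

```python
import queue
import threading
import time

buffer = queue.Queue(maxsize=8)   # the temporary storage area (the buffer)
printed = []

def printer():
    # The slow device: repeatedly takes a page from the buffer and "prints" it.
    while True:
        page = buffer.get()
        if page is None:          # sentinel value: nothing left to print
            break
        time.sleep(0.01)          # simulate slow mechanical printing
        printed.append(page)

worker = threading.Thread(target=printer)
worker.start()

# The "CPU" queues all five pages almost instantly, then is free for other work
# while the printer thread works through the buffer in the background.
for page in range(5):
    buffer.put(f"page-{page}")
buffer.put(None)                  # signal the end of the print job

worker.join()
print(printed)                    # the five pages, printed in order
```

The key point is the hand-off: once the pages are in the buffer, the producing thread is done, exactly as the CPU is freed once data reaches the printer's buffer.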
An analogy: imagine you need a book that you use often, but school rules say you must return it after each use. Why not keep the book, or loan it for longer, if you are sure you will need it again? The computer does something similar with data: it works out which data is used most frequently, or was accessed most recently, and keeps that data in a faster device. Even if the prediction is only mildly successful, there will still be a speed increase. There are various ways to cache data:

* Cache store: Cache store is a small, high-speed memory device situated between the processor and main store (RAM). It has a much faster access time than main store itself, normally 20 to 100 times faster. Data that is frequently used, or was accessed last, is kept in this small memory device on the assumption that it will be needed again. (A further assumption is that, most of the time, data and instructions situated close to each other will be needed together; for example, if one line of a file is needed, the whole file is stored in cache if possible.) When the next access to main store is required, cache store is searched first for the relevant data.

* Disk caching: This works on exactly the same principle as cache store, extended to the hard disk: a small area of RAM is reserved as a cache for disk data. As before, if data is needed, the RAM cache is checked first before the much slower disk is read.

* Caching via cookies on the WWW: Caching has even extended onto the WWW (World-Wide Web/Internet), where sites commonly use a system of storing images, user preferences, account details etc. in a text file on...
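The cache-store and disk-caching ideas above share one mechanism: check a small fast store first, and fall back to the slow store only on a miss, evicting the least recently used entry when the fast store is full. A minimal sketch of that mechanism, with an invented `disk` dictionary standing in for the slow backing store and `CACHE_SIZE` chosen arbitrarily:

```python
from collections import OrderedDict

CACHE_SIZE = 3
cache = OrderedDict()            # small, fast store (stands in for cache/RAM)
disk = {f"block-{i}": f"data-{i}" for i in range(10)}  # slow backing store
misses = 0                       # counts actual accesses to the slow device

def read(key):
    global misses
    if key in cache:             # cache hit: no slow access needed
        cache.move_to_end(key)   # mark as most recently used
        return cache[key]
    misses += 1                  # cache miss: go to the slow device
    value = disk[key]
    cache[key] = value
    if len(cache) > CACHE_SIZE:  # evict the least recently used entry
        cache.popitem(last=False)
    return value

# Repeated reads of the same block hit the cache after the first access,
# so four reads cost only two trips to the slow store.
read("block-1"); read("block-2"); read("block-1"); read("block-1")
print(misses)
```

Even this crude "most recently used stays in the fast store" policy cuts the slow accesses in half here, which is the whole point made in the notes: the prediction only has to be mildly successful to pay off.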