Rudy Senjaya
CSCI 610
Professor: Dr. Schubert
Extra Credit 3

Memory Hierarchy

CPU speed has increased at a much faster rate than memory speed, so main memory is now much slower than the CPU. A memory hierarchy is the standard solution for providing memory that is both large and fast. Memory hierarchy design is based on two important principles:

1. Make the Common Case Fast
In making a design trade-off, favor the frequent case over the infrequent case. This principle also guides how to spend resources, since making some occurrence faster has a greater impact when that occurrence is frequent. Amdahl's Law can be used to quantify this principle.

2. Principle of Locality
Temporal locality: recently accessed items are likely to be accessed again in the near future.
Spatial locality: items whose addresses are near one another tend to be referenced close together in time.

The following diagram illustrates a memory hierarchy: [figure not reproduced in this extract]
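Amdahl's Law, mentioned above, can be sketched with a short calculation. This is a minimal illustration, not part of the original assignment; the fractions and speedup factors below are made-up example values:

```python
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    """Overall speedup when a fraction of execution time is made faster.

    Amdahl's Law: speedup = 1 / ((1 - f) + f / s)
    where f is the fraction of time affected and s is its speedup.
    """
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Making a case that is 50% of execution time 2x faster helps far more
# than making a 5% case 10x faster -- hence, favor the frequent case.
print(amdahl_speedup(0.50, 2))    # about 1.33x overall
print(amdahl_speedup(0.05, 10))   # only about 1.05x overall
```

The second call shows why optimizing a rare case pays off so little: the unaffected 95% of execution time dominates.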
As we move higher in the hierarchy (closer to the CPU), each level is faster, smaller, and more expensive per byte than the level below it. Within a memory hierarchy, we should try to keep recently accessed items in the fastest memory. Since the smaller memories are faster and more expensive, we use them to hold the most recently accessed items close to the CPU.
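The idea of keeping recently accessed items in a small fast memory can be sketched as a toy simulation. This is a hypothetical model, not from the original document: a small level with least-recently-used (LRU) replacement, built here on Python's OrderedDict:

```python
from collections import OrderedDict

class SmallFastMemory:
    """Toy model of a small, fast memory level that keeps the most
    recently accessed items, evicting the least recently used one."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # key -> value, oldest entry first

    def access(self, key, load_from_below):
        if key in self.data:                 # hit: item is already here
            self.data.move_to_end(key)       # mark it most recently used
            return self.data[key], True
        value = load_from_below(key)         # miss: fetch from slower level
        self.data[key] = value
        if len(self.data) > self.capacity:   # full: evict least recently used
            self.data.popitem(last=False)
        return value, False
```

Temporal locality is exactly what makes such a level effective: if recently used items are accessed again soon, most accesses hit in the small fast memory instead of paying the cost of the level below.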
Memory speed influences CPU performance through the following formulas:

CPU execution time = (CPU clock cycles + memory stall cycles) * clock cycle time

Memory stall cycles = instruction count * memory references per instruction * miss rate * miss penalty

If an instruction or a data item is not in a register, we must fetch it from the cache, and if it is not in the cache, we must fetch it from main memory and pay the miss penalty. If the miss penalty is large, it significantly hurts CPU performance. CPU registers and caches are built from static RAM (SRAM), while main memory is built from dynamic RAM (DRAM).

There are three types of caches by memory block placement:

1. Direct-mapped cache
A memory block has only one place where it can be placed in the cache. The mapping is: cache index = (block address) mod (number of blocks in the cache).

2. Fully associative cache
A memory block can be placed anywhere in the cache.

3. Set-associative cache
A memory block can be placed anywhere within one set, where the set is chosen as (block address) mod (number of sets).
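The formulas above can be checked with a short worked example. All the numeric values here (miss rate, miss penalty, cache size, and so on) are hypothetical, chosen only to illustrate the arithmetic:

```python
# Illustrative numbers, not from the original document.
instruction_count = 1_000_000
mem_refs_per_instr = 1.5
miss_rate = 0.02          # 2% of memory references miss in the cache
miss_penalty = 100        # cycles to fetch a block from main memory
base_cpi = 1.0            # cycles per instruction with a perfect cache
clock_cycle_ns = 1.0      # 1 ns cycle time (a 1 GHz clock)

# Memory stall cycles = IC * refs per instruction * miss rate * miss penalty
stall_cycles = instruction_count * mem_refs_per_instr * miss_rate * miss_penalty

# CPU execution time = (CPU clock cycles + memory stall cycles) * clock cycle time
cpu_cycles = instruction_count * base_cpi
exec_time_ns = (cpu_cycles + stall_cycles) * clock_cycle_ns

print(int(stall_cycles))   # 3000000 stall cycles
print(int(exec_time_ns))   # 4000000 ns: misses quadruple the ideal 1000000 ns

# Direct-mapped placement: each block maps to exactly one cache index.
num_cache_blocks = 256
block_address = 1027
print(block_address % num_cache_blocks)   # index 3, since 1027 mod 256 = 3
```

Even with a 2% miss rate, the stall cycles dominate: 3,000,000 of the 4,000,000 total cycles are spent waiting on memory, which is why a large miss penalty hurts performance so much.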