1. When predicting memory dependencies, what is the cost of "over predicting" (falsely predicting dependence)? What is the cost of "under predicting" (failing to predict an actual dependence)? Ans: The cost of over predicting (falsely predicting a dependence) is unnecessary delay: the load is forced to wait for a store it does not actually depend on, serializing instructions that could have executed in parallel and reducing the schedule's instruction-level parallelism. The cost of under predicting (failing to predict a real dependence) is a memory-order violation: the load executes too early, reads stale data, and the processor must squash and re-execute the load and all instructions that consumed its value. The violation penalty is typically much larger than the stall penalty, which is why predictors are usually tuned to err on the conservative side.
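The asymmetry between the two costs can be made concrete with a small expected-penalty calculation. This is only an illustrative sketch: the cycle counts and misprediction rates below are made-up assumptions, not measurements.

```python
# Hypothetical cycle-cost illustration of the prediction trade-off.
# The penalty values are assumed numbers for illustration only.
STALL_CYCLES = 5    # over-prediction: load stalls behind an unrelated store
SQUASH_CYCLES = 20  # under-prediction: violation forces a squash and replay

def expected_penalty(p_over, p_under):
    """Expected extra cycles per load, given both misprediction rates."""
    return p_over * STALL_CYCLES + p_under * SQUASH_CYCLES

# A predictor that over-predicts on 10% of loads loses far fewer cycles
# than one that under-predicts on 10% of loads:
conservative = expected_penalty(0.10, 0.0)
aggressive = expected_penalty(0.0, 0.10)
```

With these assumed numbers the conservative predictor costs 0.5 cycles per load versus 2.0 for the aggressive one, showing why failing to predict a real dependence is the more expensive mistake.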
2. How were the simulation results for the "perfect" memory dependence predictor generated? Why can't the hardware just use the same approach, thus achieving perfect memory dependence prediction?
Ans: The results for the "perfect" memory dependence predictor were generated in simulation, where the simulator has oracle knowledge: it can look ahead at every load's and store's effective address, so it knows exactly which loads truly depend on which stores and never predicts a false dependence or incurs a violation penalty. The hardware cannot use the same approach because the effective addresses are not known at prediction time; they are only resolved later in the pipeline, after address computation, which is too late for scheduling decisions that must be made at fetch or issue.
3. Briefly describe the store sets approach to memory dependence prediction (just a paragraph)
Ans: The use of store sets is based on two observations. The first is that the history of memory-order violations is a good predictor of future memory dependencies. The second is that it is important to handle loads that depend on multiple stores, and multiple loads that depend on the same store; store sets allow both. The predictor consists of two tables. The first is a PC-indexed table called the Store Set Identifier Table (SSIT), which maintains the store sets by assigning a common store set identifier to each load and the stores in its store set. The second is the Last Fetched Store Table (LFST), which maintains dynamic information about the most recently fetched store in each store set. The information in this table is used to create a dependence when a load is fetched: the load looks up its store set identifier in the SSIT, reads the most recently fetched store for that set from the LFST, and is made to wait for that store. When a memory-order violation occurs, the offending load and store are placed in the same store set so the dependence is predicted in the future.
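The two-table mechanism described above can be sketched in code. This is a deliberately simplified illustration, not the paper's exact design: the table sizes, the modulo indexing, and the store-set merge policy are assumptions made for clarity.

```python
# Hypothetical, simplified sketch of a store-sets predictor (SSIT + LFST).
class StoreSetPredictor:
    def __init__(self, ssit_size=16, lfst_size=8):
        self.ssit = [None] * ssit_size  # PC-indexed table of store set IDs
        self.lfst = [None] * lfst_size  # store set ID -> last fetched store
        self.next_ssid = 0              # simple allocator for new store sets

    def _index(self, pc):
        # Assumed hash: direct modulo indexing by PC.
        return pc % len(self.ssit)

    def fetch_store(self, pc, inum):
        """On fetching a store: if it belongs to a store set,
        record it as the last fetched store for that set."""
        ssid = self.ssit[self._index(pc)]
        if ssid is not None:
            self.lfst[ssid] = inum

    def fetch_load(self, pc):
        """On fetching a load: return the store (inum) it should
        wait for, or None if no dependence is predicted."""
        ssid = self.ssit[self._index(pc)]
        if ssid is None:
            return None
        return self.lfst[ssid]

    def violation(self, load_pc, store_pc):
        """On a memory-order violation: place the load and the
        store in the same store set so it is predicted next time."""
        ssid = self.ssit[self._index(load_pc)]
        if ssid is None:
            ssid = self.next_ssid % len(self.lfst)
            self.next_ssid += 1
            self.ssit[self._index(load_pc)] = ssid
        self.ssit[self._index(store_pc)] = ssid

# Usage: initially the load predicts no dependence; after one
# violation, the same load waits for the last fetched store
# of its store set.
p = StoreSetPredictor()
p.violation(load_pc=0x44, store_pc=0x20)
p.fetch_store(pc=0x20, inum=7)
predicted = p.fetch_load(pc=0x44)  # the load now waits for store 7
```

Note how the design captures both hard cases: several stores hashed to the same store set all update the single LFST entry, and several loads sharing a store set all read it, so many-to-one and one-to-many dependencies fall out of the same two lookups.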