Multiprogramming: Introduction to Memory Management
INTRODUCTION TO MEMORY MANAGEMENT

In multiprogramming, the CPU switches from one process to another. Therefore, several processes must reside in main memory at the same time. The memory manager must ensure that memory is shared efficiently and is free from errors. It must take care of the following issues in memory management:

1. It must ensure that the memory spaces of processes are protected so that unauthorized access is prevented.

2. It must ensure that each process has enough memory space to be able to execute. There are cases wherein a process requires more memory space than what is available. The memory manager should be able to swap processes between main memory and the hard disk to make sure that all processes get a chance to execute.

3. It must keep track of the memory locations used by each process. It should also know which part of the memory is free to use.

REVIEW OF RELATIVE ADDRESSES AND DYNAMIC RUN-TIME LOADING

Each memory location is assigned an absolute address or physical address. This is unique and it is the actual address of the memory location.

However, most processes use relative addresses instead of specifying actual physical addresses. A relative address is based on a reference point or base address. For example, a relative address of 500 refers to the location 500 memory locations from the start of the program. If the program is loaded at memory location 1500 (the base address), then the absolute address is 1500 + 500 = 2000.

Remember that in dynamic run-time loading, the absolute address is not generated when the program is loaded, but only when it is needed by the CPU. During run-time, the CPU generates the relative address (also referred to as the LOGICAL ADDRESS) of an instruction, and this is converted to a physical address. Because of this, a program can be relocated even while it is executing.

Since the absolute address is computed during run-time, execution will slow down. This can be avoided by using special hardware to speed up the conversion process. This device is called the MEMORY MANAGEMENT UNIT (MMU). The figure below shows how the MMU converts a logical address into a physical address.

As shown in the figure, the MMU adds the contents of the base register to the logical address to generate the physical address before sending it to memory. To move a process to another part of memory, only a new value needs to be loaded into the base register (also called the RELOCATION REGISTER).
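
To make the idea concrete, here is a minimal C sketch of the relocation performed by the MMU, assuming a single base (relocation) register plus a limit register for protection. The structure and function names (mmu_t, translate) are illustrative, not part of any real MMU interface; the values reuse the earlier example where a program loaded at location 1500 accesses relative address 500.

    /* Minimal sketch of dynamic relocation with a base and a limit register. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        unsigned int base;   /* relocation register: where the process starts */
        unsigned int limit;  /* size of the process's memory space            */
    } mmu_t;

    /* Convert a logical (relative) address into a physical (absolute) address. */
    unsigned int translate(const mmu_t *mmu, unsigned int logical)
    {
        if (logical >= mmu->limit) {          /* protection check             */
            fprintf(stderr, "trap: address out of bounds\n");
            exit(EXIT_FAILURE);
        }
        return mmu->base + logical;           /* base + logical = physical    */
    }

    int main(void)
    {
        mmu_t mmu = { .base = 1500, .limit = 1000 };  /* program loaded at 1500 */
        printf("%u\n", translate(&mmu, 500));         /* prints 2000            */
        return 0;
    }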

BASIC MAIN MEMORY ALLOCATION STRATEGIES

In this part of the topic, two basic main memory allocation strategies used by the operating system will be discussed: FIXED-PARTITION STRATEGIES and VARIABLE-PARTITION STRATEGIES.

FIXED-PARTITION MEMORY ALLOCATION STRATEGIES

In fixed partitioning, the main memory is divided into a fixed number of regions or partitions. The figure below shows an example.

As shown in the figure, the 32-MB main memory is divided into 8 fixed, equal-sized partitions of 4 MB each. If the operating system assigns a process to a partition, any unused portion of that partition cannot be allocated to other processes. Here are some of the problems that may arise with this kind of allocation strategy:

a. If the size of the partition is larger than the size of the process, memory is wasted; this is called INTERNAL FRAGMENTATION. As previously mentioned, any unused portion of a partition allocated to a process cannot be allocated to other processes. For example, if a 1-MB process is loaded in a 4-MB partition, the internal fragmentation is 4 MB - 1 MB = 3 MB, so 3 MB of main memory is wasted.

b. If the size of the partition is smaller than the size of the process, the process cannot be loaded. A solution is to redesign the program or use other programming techniques such as overlays. OVERLAYING is a technique wherein only those instructions or data that are currently needed are loaded into main memory.

An alternative is to divide the memory into fixed, unequal-sized partitions. An example is shown below.

As shown in the figure, the 32-MB main memory is still divided into 8 fixed but unequal-sized partitions. This somewhat minimizes the problems encountered with equal-sized partitions. For example, if a 1-MB process is loaded in the 2-MB partition, the internal fragmentation is 2 MB - 1 MB = 1 MB. Only 1 MB of main memory will be wasted.

However, there must be a strategy for selecting which partition will be allocated to a process. Obviously, it should be able to select the smallest partition that will fit an incoming process to minimize internal fragmentation. This strategy is called the BEST-FIT AVAILABLE STRATEGY. For example, the available partition sizes are 3 MB, 5 MB, and 6 MB. If an incoming process is 4 MB in size, it should be allocated to the 5-MB partition since it is the smallest partition that can accommodate the process.
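
The best-fit available strategy can be sketched as a simple search over the partition table. The following C code is illustrative; the partition_t structure is a hypothetical representation, and the 3-MB, 5-MB, and 6-MB partitions and 4-MB process come from the example above.

    /* Sketch of the best-fit available strategy for fixed, unequal partitions:
       pick the smallest free partition that can hold the process.             */
    #include <stdio.h>

    #define NUM_PARTITIONS 3

    typedef struct {
        int size_mb;
        int in_use;
    } partition_t;

    /* Returns the index of the smallest free partition >= process_mb, or -1. */
    int best_fit_available(const partition_t parts[], int n, int process_mb)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (!parts[i].in_use && parts[i].size_mb >= process_mb &&
                (best == -1 || parts[i].size_mb < parts[best].size_mb))
                best = i;
        }
        return best;
    }

    int main(void)
    {
        partition_t parts[NUM_PARTITIONS] = { {3, 0}, {5, 0}, {6, 0} };
        int idx = best_fit_available(parts, NUM_PARTITIONS, 4);
        if (idx != -1)       /* prints: 4-MB process goes to the 5-MB partition */
            printf("4-MB process goes to the %d-MB partition\n", parts[idx].size_mb);
        return 0;
    }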

EXTERNAL FRAGMENTATION is another problem. Consider the scenario that is illustrated in the figure below.

As shown in the figure, following the best-fit available strategy, P1 is allocated to the 3-MB partition and P2 is allocated to the 5-MB partition. However, P3 cannot be loaded since it does not fit in the last remaining partition. External fragmentation happens when the available partitions are not big enough to accommodate any waiting process.

The choice of partition sizes is another problem. If most processes are small relative to the partitions, internal fragmentation will be high. If most processes are too large for the partitions, external fragmentation will be high. The partition sizes therefore greatly affect the amount of internal and external fragmentation in main memory.

The degree of multiprogramming can also be affected in fixed-partition memory allocation. For example, if memory has 8 partitions and the first one is allocated to the operating system, 7 partitions are left for other programs. This means that a maximum of only 7 processes can be multiprogrammed.

VARIABLE-PARTITION MEMORY ALLOCATION STRATEGIES

In variable partitioning, also called DYNAMIC PARTITIONING, only the exact memory space needed by a process is allocated. There is no fixed number of partitions and therefore no fixed limit on the degree of multiprogramming; both depend on the sizes of the processes and the size of main memory.

To further understand variable partitioning, consider the following example.

A computer system has a 32-MB main memory with the operating system occupying the first 4 MB. The following processes are inside the job queue:

The initial state of the main memory is shown in figure (a). The operating system occupies the first 4 MB and a 28-MB free memory space is left. A free memory space will be referred to as a HOLE.

P1 will be loaded and it can occupy the first 12 MB of the 28-MB hole. This would create a new 16-MB hole. This is shown in figure (b).

P2 will be loaded and it can occupy the first 7 MB of the 16-MB hole. This would create a new 9-MB hole. This is shown in figure (c).

P3 will be loaded and it can occupy the first 8 MB of the 9-MB hole. This would create a new 1-MB hole. However, P4 cannot be allocated since its size is larger than the available hole. This is shown in figure (d).

After 8 time units, P2 finishes its execution and its memory space is de-allocated by the operating system. This leaves two holes in memory: the 7-MB space freed by P2 and the existing 1-MB hole. This is shown in the left figure.

P4 will now be loaded and it can occupy the first 5 MB of the 7-MB hole. Memory is then left with two holes: 2 MB and 1 MB. This is shown in the right figure.

There is no internal fragmentation in variable partitioning since the operating system allocates the exact memory space needed by a process. However, as seen from the example, there is still external fragmentation.

The operating system maintains a list of holes in main memory to keep track of all the free memory spaces.

PLACEMENT STRATEGIES

In variable partitioning, the strategies used by the operating system in deciding in which hole a process will be placed are:

1. FIRST-FIT STRATEGY

The operating system searches from the beginning of main memory. The first hole encountered that is large enough for the incoming process is selected. This is considered the fastest strategy since the search ends as soon as a big enough hole is found.

2. BEST-FIT STRATEGY

The operating system searches the entire list of holes for the smallest hole that can accommodate the incoming process. A possible drawback of this approach is that it tends to leave many small holes that are too small to accommodate incoming processes, which worsens external fragmentation.

3. WORST-FIT STRATEGY

The operating system searches the entire list of holes for the largest hole, which is then allocated to the incoming process. The idea is that the remaining hole may still be large enough to accommodate other incoming processes.

The figure below will be used to illustrate the placement strategies.

If the first-fit strategy is used, P4 will be placed in the 4-MB hole. If the best-fit strategy is used, P4 will be placed in the 3-MB hole. If the worst-fit strategy is used, P4 will be placed in the 6-MB hole.
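
The three placement strategies can be sketched as searches over the list of hole sizes, as in the C code below. Since the figure is not reproduced here, the hole list (4 MB, 3 MB, and 6 MB in address order) and the 2-MB size assumed for P4 are illustrative choices consistent with the placements described above.

    /* Sketch of first-fit, best-fit, and worst-fit over a list of hole sizes. */
    #include <stdio.h>

    int first_fit(const int holes[], int n, int request)
    {
        for (int i = 0; i < n; i++)
            if (holes[i] >= request)
                return i;                 /* first hole big enough             */
        return -1;
    }

    int best_fit(const int holes[], int n, int request)
    {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
                best = i;                 /* smallest hole big enough          */
        return best;
    }

    int worst_fit(const int holes[], int n, int request)
    {
        int worst = -1;
        for (int i = 0; i < n; i++)
            if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
                worst = i;                /* largest hole big enough           */
        return worst;
    }

    int main(void)
    {
        int holes[] = { 4, 3, 6 };        /* hole sizes in MB, in address order */
        int n = 3, p4 = 2;                /* hypothetical 2-MB process P4       */
        printf("first-fit: %d MB\n", holes[first_fit(holes, n, p4)]);
        printf("best-fit : %d MB\n", holes[best_fit(holes, n, p4)]);
        printf("worst-fit: %d MB\n", holes[worst_fit(holes, n, p4)]);
        return 0;
    }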

The main problem with external fragmentation is that free space is not contiguous. As a solution, COMPACTION can be performed by the operating system on a regular basis. In compaction, processes are moved towards the beginning of the main memory. The holes are therefore grouped together, forming one large block of free memory. The figure below illustrates this.

As shown in the figure, a 12-MB process cannot be allocated before compaction. However, it can be placed after compaction.
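
The following C sketch shows the basic idea of compaction: allocated regions are slid toward the beginning of memory so that the free space forms a single hole. The region table (OS, P1, P3 and their sizes) is hypothetical, and a real implementation would also have to reload each process's relocation register.

    /* Minimal compaction sketch: slide allocated regions toward address 0. */
    #include <stdio.h>

    #define MEMORY_MB 32

    typedef struct {
        const char *name;
        int start_mb;
        int size_mb;
    } region_t;

    int main(void)
    {
        region_t regions[] = { {"OS", 0, 4}, {"P1", 6, 8}, {"P3", 20, 5} };
        int n = 3, next_free = 0;

        for (int i = 0; i < n; i++) {        /* regions assumed sorted by start */
            regions[i].start_mb = next_free; /* move region down to next_free   */
            next_free += regions[i].size_mb; /* (relocation register reloaded)  */
        }
        printf("one hole of %d MB starting at %d MB\n",
               MEMORY_MB - next_free, next_free);
        return 0;
    }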

One disadvantage of compaction is that, since it involves moving processes during run-time, it is only possible if dynamic run-time loading is used. It also consumes a significant amount of CPU time, especially if hundreds of processes have to be moved. For these reasons, compaction is rarely applied.

Another solution to external fragmentation is to allow a process to be allocated non-contiguous memory space. This memory management scheme is called PAGING.

PAGING

In paging, a process is allowed to occupy non-contiguous memory space. A process is divided into equal-sized blocks called PAGES, and main memory is divided into equal-sized blocks called FRAMES. A page is the same size as a frame, so a page fits exactly into a frame. To further understand paging, consider the illustrated example below.


As shown in the figure, notice that the pages of P1 and P2 are loaded into whatever frames of memory are available. The operating system maintains a FREE FRAME LIST in main memory to keep track of all the free frames available for allocation.

The operating system also maintains a page table for each process in memory to keep track of where it has placed the pages. The figure below shows the page table of P1 and P2 from the illustrated example.

A page table is indexed by page number. From the given page table, page 0 of P1 can be found in frame 6 of the memory. Also, page 2 of P2 can be found in frame 5 of the memory.
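
A page table can be pictured as a simple array indexed by page number, as in the short C sketch below. Only the two entries mentioned above (page 0 of P1 in frame 6, page 2 of P2 in frame 5) come from the example; the remaining entries are filler values.

    /* Per-process page tables as arrays indexed by page number. */
    #include <stdio.h>

    int main(void)
    {
        /* Entries are frame numbers; only P1[0] = 6 and P2[2] = 5 are real. */
        unsigned int p1_table[] = { 6, 0, 0, 0 };
        unsigned int p2_table[] = { 0, 0, 5, 0 };

        printf("P1 page 0 is in frame %u\n", p1_table[0]);
        printf("P2 page 2 is in frame %u\n", p2_table[2]);
        return 0;
    }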

LOGICAL ADDRESS TO PHYSICAL ADDRESS TRANSLATION

The CPU generates a logical address which is composed of two parts or fields. As shown in the figure below, the most significant part is the page number p and the least significant part is the offset d.

The PAGE NUMBER indicates what page the address refers to. The OFFSET indicates what word within the page is being addressed.

Assume, for example, a logical address whose page number field is 10 and whose offset field is 253.

The word being accessed is word 253 of page 10.

To further illustrate, the next example will use binary numbers. Assume there is a process whose size is 1 KB (1,024 bytes) and whose page size is 64 bytes.

The minimum number of bits of its logical address is

Number of Bits = log(1 KB) / log(2) = 10 bits

Now, determine exactly how the logical address is divided.

The number of pages of the process is

Number of Pages = Process Size / Page Size = 1 KB / 64 bytes = 16 pages

The number of bits for the page number field is

log(16) / log(2) = 4 bits

The number of bits for the offset field is

log(64) / log(2) = 6 bits

Therefore, the logical address is divided into a 4-bit page number field followed by a 6-bit offset field.

Assume that the CPU generates 1101010111 as the logical address. This means that 1101 (or 13) represents the page number and 010111 (or 23) represents the offset. The word being accessed is word 23 of page 13.
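
The same split can be computed with shift and mask operations, as in the C sketch below. It reuses the values from the example: a 10-bit logical address 1101010111 (0x357), a 64-byte page, and therefore a 6-bit offset.

    /* Splitting a logical address into page number and offset. */
    #include <stdio.h>

    #define PAGE_SIZE   64                          /* bytes                   */
    #define OFFSET_BITS 6                           /* log2(64)                */

    int main(void)
    {
        unsigned int logical = 0x357;               /* 1101010111 in binary    */
        unsigned int page    = logical >> OFFSET_BITS;     /* upper 4 bits     */
        unsigned int offset  = logical & (PAGE_SIZE - 1);  /* lower 6 bits     */
        printf("page %u, offset %u\n", page, offset);      /* page 13, offset 23 */
        return 0;
    }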

It was previously discussed that a logical address must be converted to a physical address before being sent to the memory. Since pages are placed in frames, the physical address must contain a frame number.

Therefore, a physical address is also composed of two parts or fields. As shown in the figure below, the most significant part is the frame number f and the least significant part is the offset d.

The frame number identifies the frame where the page is located. The offset identifies the location of the word within the frame. Since the page size is the same as the frame size, the offset field is the same as that of the logical address.

Continuing the previous example, assume that the size of the main memory is 64 KB.

The number of bits in the physical address is

Number of Bits = log(64 KB) / log(2) = 16 bits

Now, determine exactly how the physical address is divided.

As previously given, the page size is 64 bytes. Therefore, the frame size is also 64 bytes. The number of frames in the main memory is

Number of Frames = Main Memory Size / Frame Size = 64 KB / 64 bytes = 1,024 frames

The number of bits needed to identify the frames is

log(1,024) / log(2) = 10 bits

The offset field is the same as that of the logical address, which is 6 bits. Therefore, the physical address is divided into a 10-bit frame number field followed by a 6-bit offset field.

From the example, the 10-bit logical address is given as 1101010111. Assuming the page table below, determine the corresponding 16-bit physical address.

The page number 1101 (13) is used as the index in accessing the page table. From the given page table, the frame number is determined to be 0011110101 (245). The physical address is therefore 0011110101 010111, the frame number followed by the offset.

To summarize, a logical address is converted to a physical address as follows:

1. Determine the page number of the logical address. The number of bits of the page number depends on the number of pages of the process.

2. Use the page number to index into the page table of the process. The page table maintains the frame numbers of all the pages of the process.

3. Combine the frame number with the offset field of the logical address to form the physical address: the frame number occupies the most significant bits and the offset the least significant bits.
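
These three steps can be sketched in a few lines of C. The code below reuses the running example: 64-byte pages, logical address 1101010111, and a page table in which page 13 maps to frame 245; the other page table entries are filler values.

    /* Logical-to-physical address translation with a page table. */
    #include <stdio.h>

    #define OFFSET_BITS 6                  /* 64-byte pages and frames        */
    #define NUM_PAGES   16                 /* 1-KB process / 64-byte pages    */

    int main(void)
    {
        /* Page table indexed by page number; each entry is a frame number.  */
        unsigned int page_table[NUM_PAGES] = { 0 };
        page_table[13] = 245;              /* 0011110101 from the example     */

        unsigned int logical  = 0x357;                       /* 1101010111   */
        unsigned int page     = logical >> OFFSET_BITS;      /* step 1       */
        unsigned int frame    = page_table[page];            /* step 2       */
        unsigned int physical = (frame << OFFSET_BITS)       /* step 3       */
                              | (logical & ((1u << OFFSET_BITS) - 1));

        printf("physical address = %u (0x%04X)\n", physical, physical);
        return 0;                          /* prints 15703 (0x3D57)           */
    }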

The figure below illustrates this procedure.


In paging, the problem of external fragmentation is eliminated. There is still internal fragmentation, but it only occurs in the last page of each process. Suppose there is a process whose size is 64,050 bytes and the page size is 64 bytes. The process will have a total of 64,050 / 64 ≈ 1,000.8, which is rounded up to 1,001 pages. The last page will only have 50 bytes in it.
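
The page count and last-page arithmetic above can be checked with a short C sketch using the same 64,050-byte process size and 64-byte page size.

    /* Checking the internal-fragmentation arithmetic for the last page. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int size = 64050, page = 64;
        unsigned int pages = (size + page - 1) / page;  /* ceiling: 1,001 pages   */
        unsigned int last  = size % page;               /* bytes in last page: 50 */
        printf("%u pages, last page holds %u bytes, %u bytes wasted\n",
               pages, last, page - last);
        return 0;
    }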

The size of the page (or frame) is another issue in paging. If the page size is too small, internal fragmentation will decrease. However, this will result in too many pages, thus increasing the size of the page table. Since page tables reside in main memory, they will occupy more memory space. Transfer time between the hard disk and main memory will also increase since there are more pages to transfer. If the page size is too large, the page table will be smaller and page transfers will be faster. However, internal fragmentation will increase. In practice, page sizes range from 512 bytes to 16 MB depending on the computer architecture.

PAGE TABLE IMPLEMENTATION

The translation from logical address to physical address should be fast; otherwise, system throughput will decrease. Page tables must therefore be stored where they can be accessed quickly. The following options can be used:

1. DEDICATED REGISTERS

Page tables can be stored in high-speed dedicated registers. This results in very fast address translation. However, it can be expensive because, in practice, page tables contain a very large number of entries.

2. MAIN MEMORY

Page tables can also be stored in main memory. The MMU maintains a page-table base register (PTBR) which points to the location of the page table in memory. However, this option is slow because every reference requires two main memory accesses: one to read the page table for the frame number and another to fetch the actual data or instruction.

3. CACHE MEMORY

A more popular option is to store recently used page table entries in a TRANSLATION LOOK-ASIDE BUFFER (TLB). This small but fast cache memory holds the most recently used page table entries. The page tables themselves are still stored in main memory; however, the most recently used entries are copied into the TLB so that future translations are quick.
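
A TLB lookup can be sketched as a search over a handful of recently used (page, frame) pairs, falling back to the page table in main memory on a miss. The C sketch below is illustrative; the TLB size and its entries are assumptions, not taken from the text.

    /* Sketch of a TLB lookup before consulting the page table in memory. */
    #include <stdio.h>

    #define TLB_ENTRIES 4

    typedef struct {
        int valid;
        unsigned int page;
        unsigned int frame;
    } tlb_entry_t;

    /* Returns 1 on a TLB hit and stores the frame number in *frame. */
    int tlb_lookup(const tlb_entry_t tlb[], unsigned int page, unsigned int *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].page == page) {
                *frame = tlb[i].frame;
                return 1;                    /* hit: no extra memory access    */
            }
        }
        return 0;                            /* miss: walk the page table      */
    }

    int main(void)
    {
        tlb_entry_t tlb[TLB_ENTRIES] = { {1, 13, 245}, {1, 2, 7}, {0}, {0} };
        unsigned int frame;
        if (tlb_lookup(tlb, 13, &frame))
            printf("TLB hit: frame %u\n", frame);
        else
            printf("TLB miss: consult the page table in main memory\n");
        return 0;
    }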
