Paging
Locating Page State
- Process: "Machine! Store to address 0x10000!"
- MMU: "Where the heck is virtual address 0x10000 supposed to map to? Kernel… help!"
- (Exception.)
- Kernel: Let’s see… where did I put that page table entry for 0x10000… just give me a minute… I know it’s around here somewhere… I really should be more organized!
- Speed: translation is a hot path and should be as efficient as possible.
- Compactness: the data structures we use should not take up too much physical memory.
Page Tables
The data structure used to quickly map a virtual page number to a page table entry is called a page table.
- Each process has a separate page table.
- Why?
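As a concrete illustration, here is a minimal sketch of what a single 4-byte page table entry might look like in C, assuming 4 KB pages. The exact bit layout is architecture-specific; the field names here are assumptions chosen for clarity.

```c
#include <stdint.h>

/* A minimal sketch of a 4-byte page table entry (PTE) for 4 KB pages.
 * Real layouts are architecture-specific; these fields are illustrative. */
typedef struct {
    uint32_t valid    : 1;   /* does this entry hold a usable mapping? */
    uint32_t writable : 1;   /* may the process write through this mapping? */
    uint32_t dirty    : 1;   /* has the page been modified? */
    uint32_t unused   : 9;   /* room for other flags */
    uint32_t ppn      : 20;  /* physical page number the virtual page maps to */
} pte_t;
```

Because each process has its own page table, the same virtual page number can map to different physical pages in different processes.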
Flat Page Tables
Approach: use one array to hold all page table entries for each process. The virtual page number is the index into this array.
- Speed: O(1). The VPN is used directly as an index into the array.
- Compactness: 2^20 entries * 4 bytes = 4 MB per process that may have to be contiguous! Most of it is unused!
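A sketch of the flat approach, assuming a 32-bit virtual address space with 4 KB pages (so a 20-bit VPN indexing a 2^20-entry array). The PTE is treated as an opaque 4-byte value and the names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12                 /* 4 KB pages */
#define NUM_PAGES  (1u << 20)         /* 2^20 virtual pages per process */

typedef uint32_t pte_t;               /* opaque 4-byte PTE */

/* One flat array per process: 2^20 entries * 4 bytes = 4 MB, mostly unused.
 * Lookup is O(1): the VPN indexes the array directly. */
pte_t *flat_lookup(pte_t *page_table, uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    return &page_table[vpn];
}
```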
Linked List Page Tables
Approach: keep a list of PTEs for each process, searched on each translation.
- Speed: O(n) for n valid virtual pages.
- Compactness: 4 bytes * n for n valid virtual pages. All entries are used!
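A sketch of the linked-list approach. Each node records which VPN it maps alongside its PTE, so beyond the 4-byte PTE itself each valid page also pays for a VPN field and a next pointer; translation walks the list.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12                 /* 4 KB pages */

typedef uint32_t pte_t;               /* opaque 4-byte PTE */

/* One node per *valid* virtual page, so space grows only with actual use. */
struct pte_node {
    uint32_t vpn;                     /* virtual page this node maps */
    pte_t pte;                        /* the entry itself */
    struct pte_node *next;
};

/* O(n) per translation: walk the process's list looking for the VPN. */
pte_t *list_lookup(struct pte_node *head, uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    for (struct pte_node *node = head; node != NULL; node = node->next) {
        if (node->vpn == vpn) {
            return &node->pte;
        }
    }
    return NULL;                      /* no mapping: page fault */
}
```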
Multi-Level Page Tables
Approach: build a tree-like data structure mapping VPN to PTE. Break the VPN into multiple parts, each used as an index at a separate level of the tree.
- Example: with 4 KB pages on a 32-bit machine the VPN is 20 bits. Use the top 10 bits as an index into the top-level page table and the bottom 10 bits as an index into a second-level page table.
- Each page table is 2^10 * 4 bytes = 4 KB. Convenient!
- Speed: O(c). Constant number of lookups per translation, depending on tree depth.
- Compactness: depends on the sparsity of the address space, but better than flat and worse than linked list.
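A sketch of the two-level lookup: the top 10 bits of the VPN select an entry in the top-level table, and the bottom 10 bits select an entry in a second-level table. For simplicity the top level here holds kernel pointers to second-level tables; a hardware-walked table would store physical frame numbers instead, and all names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT  12                          /* 4 KB pages: 20-bit VPN */
#define LEVEL_BITS  10                          /* 10 bits per level */
#define LEVEL_MASK  ((1u << LEVEL_BITS) - 1)

typedef uint32_t pte_t;                         /* opaque 4-byte PTE */

/* Each second-level table is 2^10 * 4 bytes = 4 KB: exactly one page.
 * Top-level slots for unused 4 MB regions of the address space stay NULL,
 * which is where the space savings over a flat table come from. */
typedef struct {
    pte_t *tables[1u << LEVEL_BITS];
} top_level_t;

/* O(c): a constant two lookups per translation for this two-level tree. */
pte_t *two_level_lookup(top_level_t *top, uint32_t vaddr) {
    uint32_t vpn          = vaddr >> PAGE_SHIFT;
    uint32_t top_index    = (vpn >> LEVEL_BITS) & LEVEL_MASK;  /* top 10 bits */
    uint32_t second_index = vpn & LEVEL_MASK;                  /* bottom 10 bits */

    pte_t *second = top->tables[top_index];
    if (second == NULL) {
        return NULL;                            /* whole region unmapped: fault */
    }
    return &second[second_index];
}
```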
Out of Core
So far we have been talking about cases where processes fit in the physical memory available on the machine.
When we run out, there are two options:
- Fail: either don’t load the process (exec()), don’t create a new process (fork()), refuse to allocate more heap (sbrk()), or kill the process if it is trying to allocate more stack. (See the sketch after this list for how this failure looks to the process.)
- Create more space, preserving the contents of memory for later use.
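From the process’s point of view, the "fail" option surfaces as an error return from the system call. For example, on a 64-bit Linux-like system sbrk() returns (void *) -1 and sets errno (typically to ENOMEM) when the kernel refuses to grow the heap. A minimal sketch:

```c
#define _DEFAULT_SOURCE                 /* expose sbrk() in glibc */
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <errno.h>
#include <string.h>

int main(void) {
    /* Ask for a deliberately enormous heap extension (1 TB). */
    if (sbrk((intptr_t)1 << 40) == (void *) -1) {
        /* The kernel chose the "fail" option: refuse to allocate more heap. */
        fprintf(stderr, "sbrk failed: %s\n", strerror(errno));
        return 1;
    }
    return 0;
}
```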
Virtually Sneaky
Virtual address translation gives the kernel the ability to remove memory from a process behind its back.
- The last time the process used the virtual address, it behaved like memory.
- The next time the process uses the virtual address, it behaves like memory.
- In between, whatever data was stored at that address must be preserved.