Page Translation
Let’s Regroup
-
Fast mapping from any virtual byte to any physical byte.
-
Operating system cannot do this. Can hardware help?
What’s the Catch?
-
CAMs are limited in size. We cannot make them arbitrarily large.
-
Segments are too large and lead to internal fragmentation.
-
Mapping individual bytes would mean that the TLB would not be able to cache many entries and performance would suffer.
-
Is there a middle ground?
Pages
The modern solution is to choose a translation granularity that is small enough to limit internal fragmentation but large enough to allow the TLB to cache entries covering a significant amount of memory.
-
Also limits the size of kernel data structures associated with memory management.
Page Size
-
4K is a very common page size. 8K or larger pages are also sometimes used.
-
4K pages and a 128-entry TLB allow caching translations for 512 KB of memory (see the sketch after this list).
-
You can think of pages as fixed-size segments, so the bound is the same for each.
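As a quick check of that arithmetic, here is a minimal sketch; the page size and entry count are just the example values above:

```c
#include <stdio.h>

#define PAGE_SIZE   4096u   /* 4K pages, as above    */
#define TLB_ENTRIES 128u    /* example 128-entry TLB */

int main(void) {
    /* Each TLB entry covers one page, so total coverage ("TLB reach")
       is entries * page size. */
    unsigned long reach = (unsigned long)TLB_ENTRIES * PAGE_SIZE;
    printf("TLB reach: %lu KB\n", reach / 1024);   /* prints 512 */
    return 0;
}
```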
Page Translation
-
We refer to the portion of the virtual address that identifies the page as the virtual page number (VPN) and the remainder as the offset.
-
Virtual pages map to physical pages.
-
All addresses inside a single virtual page map to the same physical page.
-
Check: for 4K pages, split the 32-bit address into a virtual page number (top 20 bits) and an offset (bottom 12 bits). Check whether a virtual-page-to-physical-page translation exists for this page.
-
Translate: Physical Address = Physical Page base address + offset. (See the sketch below.)
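Here is a minimal sketch of the check-and-translate steps for 32-bit addresses and 4K pages. The mappings table and the lookup_physical_page helper are made up for illustration and stand in for whatever structure actually holds the translations:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                        /* 4K pages: 12 offset bits */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

/* Toy translation table for this sketch: two hard-coded
   virtual-page-to-physical-page mappings. */
struct mapping { uint32_t vpn, ppn; };
static const struct mapping mappings[] = {
    { 0x10, 0x3a2 },   /* virtual page 0x10 -> physical page 0x3a2 */
    { 0x11, 0x1f0 },
};

static bool lookup_physical_page(uint32_t vpn, uint32_t *ppn) {
    for (size_t i = 0; i < sizeof(mappings) / sizeof(mappings[0]); i++) {
        if (mappings[i].vpn == vpn) {
            *ppn = mappings[i].ppn;
            return true;
        }
    }
    return false;
}

/* Check: split the address and look for a translation.
   Translate: physical address = physical page base + offset. */
static bool translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* top 20 bits    */
    uint32_t offset = vaddr & PAGE_MASK;     /* bottom 12 bits */
    uint32_t ppn;

    if (!lookup_physical_page(vpn, &ppn)) {
        return false;                        /* no mapping: fault */
    }
    *paddr = (ppn << PAGE_SHIFT) | offset;
    return true;
}

int main(void) {
    uint32_t paddr;
    if (translate(0x10000, &paddr)) {        /* VPN 0x10, offset 0x000 */
        printf("0x10000 -> 0x%x\n", (unsigned)paddr);  /* prints 0x3a2000 */
    }
    return 0;
}
```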
TLB Management
-
The operating system loads TLB entries.
-
The TLB asks the operating system for help via a TLB exception. The operating system must either load the mapping or figure out what to do with the process. (Maybe boom.)
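A rough sketch of what that refill path might look like; page_table_lookup, tlb_load_entry, and kill_current_process are hypothetical stand-ins (stubbed out here so the sketch compiles) for the kernel's page-table lookup, the hardware's TLB-write primitive, and the "boom" case, not any real kernel's API:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins, stubbed out for illustration. */
static bool page_table_lookup(uint32_t vpn, uint32_t *pte) {
    (void)vpn; (void)pte;
    return false;                            /* pretend no mapping exists */
}
static void tlb_load_entry(uint32_t vpn, uint32_t pte) {
    printf("TLB: loaded VPN 0x%x -> PTE 0x%x\n", (unsigned)vpn, (unsigned)pte);
}
static void kill_current_process(void) {
    printf("No valid mapping: boom.\n");
}

/* Invoked on a TLB exception: either load the missing mapping into the
   TLB (so the faulting instruction can retry) or give up on the process. */
static void tlb_miss_handler(uint32_t faulting_vaddr) {
    uint32_t vpn = faulting_vaddr >> 12;     /* 4K pages */
    uint32_t pte;

    if (page_table_lookup(vpn, &pte)) {
        tlb_load_entry(vpn, pte);            /* OS loads the TLB entry */
    } else {
        kill_current_process();              /* maybe boom */
    }
}

int main(void) {
    tlb_miss_handler(0x10000);
    return 0;
}
```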
Paging: Pros
-
Maintains many of the pros of segmentation, which can be layered on top of paging.
-
Pro: can organize and protect regions of memory appropriately.
-
Pro: better fit for address spaces. Even less internal fragmentation than segmentation due to smaller allocation size.
-
Pro: no external fragmentation due to fixed allocation size!
Paging: Cons
-
Con: requires per-page hardware translation. We use hardware (the TLB) to help us.
-
Con: requires per-page operating system state. A lot of clever engineering here.
Page State
-
Store information about each virtual page.
-
Locate that information quickly.
Page Table Entries (PTEs)
We refer to a single entry storing information about a single virtual page used by a single process as a page table entry (PTE).
-
(We will see in a few slides why we call them page table entries.)
-
Can usually jam everything into one 32-bit machine word:
-
Location: 20 bits. (Physical Page Number or location on disk.)
-
Permissions: 3 bits. (Read, Write, Execute.)
-
Valid: 1 bit. Is the page located in memory?
-
Referenced: 1 bit. Has the page been read or written recently?
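One way those fields might be packed into a 32-bit word, as an illustrative sketch (the exact bit positions are made up here, not any real architecture's PTE format):

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative 32-bit PTE layout: 20-bit location (physical page number
   or disk location), R/W/X permission bits, valid bit, referenced bit. */
#define PTE_PPN_SHIFT   12
#define PTE_READ        (1u << 0)
#define PTE_WRITE       (1u << 1)
#define PTE_EXEC        (1u << 2)
#define PTE_VALID       (1u << 3)
#define PTE_REFERENCED  (1u << 4)

/* Build a PTE for a page that is resident in memory. */
static inline uint32_t pte_make(uint32_t ppn, uint32_t perms) {
    return (ppn << PTE_PPN_SHIFT) | perms | PTE_VALID;
}

/* Extract the physical page number from a PTE. */
static inline uint32_t pte_ppn(uint32_t pte) {
    return pte >> PTE_PPN_SHIFT;
}

int main(void) {
    uint32_t pte = pte_make(0x3a2, PTE_READ | PTE_WRITE);
    printf("PTE 0x%08x maps to physical page 0x%x\n",
           (unsigned)pte, (unsigned)pte_ppn(pte));
    return 0;
}
```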
Locating Page State
-
Process: "Machine! Store to address 0x10000!"
-
MMU: "Where the heck is virtual address 0x10000 supposed to map to? Kernel…help!"
-
(Exception.)
-
Kernel: Let’s see… where did I put that page table entry for 0x10000… just give me a minute… I know it’s around here somewhere… I really should be more organized!