Technical Women

Figure 1. Clarisse de Souza

Today: Introduction to Memory Management

  • Problems with direct physical allocation.

  • Goals of memory multiplexing.

  • The address space abstraction.


Scheduling Goals

(Or, how to evaluate schedulers.)

  • Meeting deadlines: how well does the scheduler meet deadlines, and how predictably?

  • Resource allocation: how fully does it allocate and utilize system resources?

  • Performance: making great scheduling decisions is futile if the decision process itself takes forever.

On human-facing systems, meeting deadlines (or interactivity) usually wins. Why?
  • Your time is more valuable than your computer’s.

The Know-Nothings

Give two examples of schedulers that do not use any information about threads:
  • Random

  • Round Robin
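
A minimal sketch of a round-robin run queue in C (the structures and names here are illustrative, not taken from any real kernel): the scheduler just cycles through the runnable threads, giving each one quantum in turn, and never consults any information about what the threads have done or will do.

#include <stddef.h>

#define MAX_THREADS 64

struct thread; /* opaque: a hypothetical thread handle */

static struct thread *run_queue[MAX_THREADS];
static size_t queue_length = 0;
static size_t next_index = 0;

/* Round robin: return the next runnable thread, wrapping around.
 * No per-thread information is used beyond "is it runnable?" */
struct thread *round_robin_pick(void) {
    if (queue_length == 0) {
        return NULL; /* nothing to run */
    }
    struct thread *chosen = run_queue[next_index];
    next_index = (next_index + 1) % queue_length;
    return chosen;
}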

The Know-It-Alls

Let’s say we can predict the future. What might we like to know about the thread we are about to execute?
  • How long is it going to use the CPU?

  • Will it block or yield?

  • How long will it wait?

Non-Oracular

  • Since we cannot predict the future, we use the past to predict it.

  • One example of such a scheduler is multi-level feedback queues (MLFQ).
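
Here is a rough sketch of the MLFQ idea in C (illustrative only; real implementations differ in how many levels they use and how they adjust quanta): threads that burn their entire quantum look CPU-bound and are demoted, while threads that block or yield look interactive and are promoted, so past behavior serves as the prediction.

#define NUM_LEVELS 8

/* Hypothetical per-thread scheduling state. */
struct thread {
    int level; /* current queue: 0 is the highest priority */
};

/* Quantum expired: the thread looked CPU-bound, so demote it. */
void mlfq_quantum_expired(struct thread *t) {
    if (t->level < NUM_LEVELS - 1) {
        t->level++;
    }
}

/* The thread blocked on I/O or yielded early: it looked
 * interactive, so promote it back toward the top. */
void mlfq_blocked(struct thread *t) {
    if (t->level > 0) {
        t->level--;
    }
}

/* To pick the next thread, the scheduler scans the queues from
 * level 0 downward and runs a thread from the first non-empty one. */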

Linux Scheduling

  • Who is Ingo Molnar? The Linux scheduling subsystem maintainer and developer.

  • Who is Con Kolivas? An Australian anaesthetist and Linux kernel developer who focused on improving interactivity while striving for simplicity and predictability.

Rotating Staircase Deadline Scheduler

Assume we have a RSDL scheduler with a 5 ms quantum and 10 levels.

  • A thread that starts at the highest priority level 0 can potentially run in which levels? 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.

  • A thread that starts at priority level 5 can potentially run in which levels? 5, 6, 7, 8, and 9.

  • At the beginning of a time quantum we have one runnable thread at each of priority levels 0, 3, 7, and 9. What is the longest amount of time before the thread at level 9 has a chance to run? 15 ms = 5 ms + 5 ms + 5 ms.
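
Under the simplified model above, where a thread entering at level k runs one 5 ms quantum at each of levels k through 9 before falling off the bottom of the staircase, the total CPU time it can receive follows directly; a small sketch (the constants are the in-class assumptions, not RSDL's real defaults):

#define RSDL_LEVELS     10
#define RSDL_QUANTUM_MS  5

/* Maximum CPU time (in ms) a thread starting at `level` can receive
 * during one pass down the staircase: one quantum at each of the
 * levels level, level + 1, ..., 9. */
int rsdl_max_cpu_ms(int level) {
    return (RSDL_LEVELS - level) * RSDL_QUANTUM_MS;
}

/* Examples: rsdl_max_cpu_ms(0) == 50 ms, rsdl_max_cpu_ms(5) == 25 ms. */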

Scheduling: Questions?

Professor's favorite topic

Generates irrational enthusiasm

Space v. Time Multiplexing

Time multiplexing: sharing a resource by dividing up access to it over time.
  • Example: CPU scheduling on a single-core system.

  • Example: scheduling a shared room by time slot.

  • Example: Car share programs.

Space multiplexing: sharing a resource by dividing it into smaller pieces.
  • Example: CPU scheduling on a multi-core system. Coarse-grained: there are only a few cores to divide up.

  • Example: memory management. Fine-grained: memory is divided into many small pieces.

  • Example: splitting up a cake.

Clarification: Memory Allocation

Memory allocation happens in several steps:
  1. A process requests large chunks of memory from the OS kernel…​

  2. …​and then divides available memory up using a process-level allocation library (such as malloc).

  • You’re probably more familiar with the second step, but we’re going to focus on the first. (Although allocators are fascinating and still an area of active research…​)
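
A minimal sketch of the two steps on a POSIX system (the sizes are arbitrary examples): mmap asks the kernel directly for a large region, while malloc hands out small pieces from memory the C library has already obtained from the kernel.

#include <stdlib.h>
#include <sys/mman.h>

int main(void) {
    /* Step 1: request a large chunk of memory from the kernel. */
    size_t big = 1 << 20; /* 1 MB, an arbitrary example size */
    void *chunk = mmap(NULL, big, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (chunk == MAP_FAILED) {
        return 1;
    }

    /* Step 2: the process-level allocator (malloc) divides up memory
     * that the C library has already requested from the kernel. */
    int *small = malloc(128 * sizeof(int));
    free(small);

    munmap(chunk, big);
    return 0;
}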

Direct Physical Memory Multiplexing

  • Why not just divide physical memory between processes?


Direct Multiplexing: Problems

  • Limited to the amount of physical memory on the machine.

  • What happens if processes request memory that they do not use?


Direct Multiplexing: Problems

  • Limited to the amount of physical memory on the machine.

  • Potentially discontiguous allocations.

    • Complicates process memory layout.

  • Potential for fragmentation to reduce allocation efficiency.

Process Memory Layout

How do processes know where their code and data are located?

int data[128];
...
data[5] = 8; // Where the heck is data[5]?
...
result = foo(data[5]); // Where the heck is foo?

Fragmentation

Fragmentation: when a request for contiguous memory fails even though enough unused memory is available on the system.

  • Internal fragmentation: unused memory is inside existing allocations.

  • External fragmentation: unused memory is between existing allocations.

  • Note that it is not always feasible to split data structures across multiple pieces of discontiguous memory:

int data[10240]; // I had better be contiguous.
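
A toy illustration of external fragmentation (the 100-byte "heap" below is made up, not a real allocator): 40 bytes are free in total, but because they are split into two 20-byte holes, a contiguous 30-byte request cannot be satisfied.

#include <stdbool.h>
#include <stdio.h>

/* A made-up 100-byte heap divided into five 20-byte regions;
 * true means the region is already allocated. */
#define REGIONS 5
#define REGION_SIZE 20
static bool used[REGIONS] = { true, false, true, false, true };

/* Can a contiguous request for `bytes` be satisfied? */
static bool can_allocate(int bytes) {
    int run = 0;
    for (int i = 0; i < REGIONS; i++) {
        run = used[i] ? 0 : run + REGION_SIZE;
        if (run >= bytes) {
            return true;
        }
    }
    return false;
}

int main(void) {
    printf("30-byte request: %s\n", can_allocate(30) ? "ok" : "fails"); /* fails */
    printf("20-byte request: %s\n", can_allocate(20) ? "ok" : "fails"); /* ok */
    return 0;
}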

Direct Multiplexing: Problems

  • Limited to the amount of physical memory on the machine.

  • Potentially discontiguous allocations.

  • Potential for fragmentation to reduce allocation efficiency.

  • Can I enforce my allocations?

    • Not without checking every memory access. Way too slow, but hardware could help…​

  • Can I safely reclaim unused memory?

    • Leads to increased discontiguity and suffers from the same enforcement problem.

Memory Multiplexing Requirements

  • Grant: the kernel should be able to allocate memory to processes statically (at startup) and dynamically (as needed).

  • Enforce: the kernel should be able to enforce memory allocations efficiently.

  • Reclaim: the kernel should be able to repurpose unused memory without destroying its contents.

  • Revoke: the kernel should be able to stop a process from using memory that it was previously allocated.
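
One way to make these requirements concrete is to imagine them as the interface a kernel memory manager must provide; the names and types below are hypothetical and exist only to mirror the four requirements.

#include <stdbool.h>
#include <stddef.h>

struct proc; /* hypothetical process handle */
struct region { void *start; size_t length; };

/* Grant: allocate memory to a process, at startup or on demand. */
struct region mm_grant(struct proc *p, size_t bytes);

/* Enforce: decide whether an access by p is allowed. Doing this in
 * software on every access is far too slow; hardware must help. */
bool mm_enforce(struct proc *p, void *address);

/* Reclaim: repurpose memory the process is not actively using
 * without destroying its contents. */
void mm_reclaim(struct proc *p, struct region r);

/* Revoke: take back memory the process was previously allocated. */
void mm_revoke(struct proc *p, struct region r);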

Comparison to CPU

  • Grant: schedule a thread via a context switch.

  • Enforce: interrupt a thread using a timer interrupt.

  • Reclaim: this is new.

  • Revoke: deschedule a thread via a context switch.

Address Spaces: The Memory Management Abstraction

We provide every process with an identical view of memory that makes it appear:

  • plentiful, contiguous, uniform, and private.
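
A quick way to see "identical and private" in action on a POSIX system: after fork, the parent and child print the same virtual address for the same global variable, yet the child's write is invisible to the parent.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 42;

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        value = 7; /* the child's write is visible only in the child */
        printf("child:  &value = %p, value = %d\n", (void *)&value, value);
        return 0;
    }
    waitpid(pid, NULL, 0);
    /* Same virtual address the child printed, but still 42: each
     * process has its own private copy of the address space. */
    printf("parent: &value = %p, value = %d\n", (void *)&value, value);
    return 0;
}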

Figure 2. The Address Space

Address Spaces: Layout

The uniformity of address spaces simplifies process layout:

  • "I always put my code and static variables at 0x10000."

  • "My heap always starts at 0x20000000 and grows up."

  • "My stack always starts at 0xFFFFFFFF and grows down."


Convention

  • Process layout is specified by the Executable and Linkable Format (ELF) file. (Remember ELF?)

  • Some of the layout is a matter of convention.

  • Example: why not load the code at 0x0?

    • To catch possibly the most common programmer error: NULL pointer problems!

    • Leaving a large portion of the address space empty, starting at 0x0, allows the kernel to catch these errors, including small offsets from NULL caused by dereferencing NULL structure pointers:

struct bar * foo = NULL;
foo->bar = 10; // Dereferences address 0x0 plus the offset of bar, which the kernel traps.

Segmentation fault (core dumped)

Next Time

  • Address translation.

  • Levels of indirection.

  • Physical and virtual addresses.