Today
- Threads.
- Threading implementations: user v. kernel.
- Thread states.
Review: Thread Transitions
A transition between two threads is called a context switch.
Review: CPU Limitations
- There is only one, and
- it is way faster than other parts of the system.
Review: Batch Scheduling
- Running jobs sequentially until completion.
- Slow devices can idle the CPU.
Review: The Illusion of Concurrency
- By rapidly switching the CPU between multiple running tasks.
Review: Seize the Day
- By using a periodic timer to generate hardware interrupts (see the sketch below).
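A rough user-level illustration of the same idea (a sketch, not the in-kernel mechanism): setitimer() arms a periodic timer whose SIGALRM signal interrupts whatever the program is doing, much as the hardware timer tick periodically hands control back to the kernel. The 10 ms interval and the handler name tick are illustrative choices.

```c
#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

/* Runs every time the periodic timer fires (SIGALRM). */
static void tick(int sig)
{
    (void)sig;
    write(STDOUT_FILENO, "tick\n", 5);   /* async-signal-safe output */
}

int main(void)
{
    struct itimerval interval = {
        .it_value    = { .tv_sec = 0, .tv_usec = 10000 },  /* first expiry in 10 ms */
        .it_interval = { .tv_sec = 0, .tv_usec = 10000 },  /* then re-arm every 10 ms */
    };

    signal(SIGALRM, tick);                 /* install the handler */
    setitimer(ITIMER_REAL, &interval, NULL);

    for (;;)                               /* the "running task" being interrupted */
        pause();
}
```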
Review: The Illusion of Concurrency
- Timer interrupts mean that a running thread may be stopped at any time.
- When the thread restarts we want it to appear that nothing has happened.
Review: Saving Thread State
- Registers
- Stack
Review: Context Switch Overhead
- Context switches are not free: they require executing many instructions and saving a fair amount of state.
- Context switching also incurs the cost of entering the kernel.
- We can’t switch too often, or the overhead starts to dominate!
Questions: CPU Limitations, Concurrency, Context Switching?
Threads
Private to each thread:
- Registers
- Stack
Shared by all threads in a process:
- Memory
- File descriptor table.
Threads
Why Use Threads?
- Threads can be a good way of thinking about applications that do multiple things "simultaneously."
- Threads may naturally encapsulate some data about a certain thing that the application is doing.
- Threads may help applications hide or parallelize delays caused by slow devices.
Threads v. Processes Part II
Good example from Wikipedia: multiple threads within a single process are like multiple cooks trying to prepare the same meal together.
- Each one is doing one thing.
- They are probably doing different things.
- They all share the same recipe but may be looking at different parts of it.
- They have private state but can communicate easily.
- They must coordinate!
Meme: the OS corrupted the cake.
Aside: Threads v. Events
- While threads are a reasonable way of thinking about concurrent programming, they are not the only way—or even always the best way—to make use of system resources.
- Another approach is known as event-driven programming.
- Anyone who has done JavaScript development or used frameworks like node.js has grown familiar with this programming model.
- Threads can block, so we make use of the CPU by switching between threads!
- Event handlers cannot block, so we can make use of the CPU by simply running events until completion (see the sketch below).
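To make the contrast concrete, here is a minimal single-threaded event loop in C; it is a sketch that uses poll() with standard input as the only event source (a real server would register sockets). The loop blocks only inside poll(); each handler then runs to completion without blocking.

```c
#include <poll.h>
#include <unistd.h>

int main(void)
{
    /* One registered event source; a server would have one entry per connection. */
    struct pollfd fds[1] = {
        { .fd = STDIN_FILENO, .events = POLLIN },
    };

    for (;;) {
        poll(fds, 1, -1);                  /* block here, and only here */

        if (fds[0].revents & POLLIN) {     /* event handler: must not block */
            char buf[128];
            ssize_t n = read(fds[0].fd, buf, sizeof buf);
            if (n <= 0)
                break;                     /* EOF or error: leave the loop */
            write(STDOUT_FILENO, buf, (size_t)n);
        }
    }
    return 0;
}
```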
Naturally Multithreaded Applications
- Use a separate thread to handle each incoming request.
- Separate threads for each open tab.
- When loading a page, separate threads to request and receive each unique part of the page.
- Divide-and-conquer "embarrassingly parallelizable" data sets (see the sketch below).
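A sketch of the last case: split an array across a few POSIX threads, let each thread sum its own slice independently, then combine the partial results. The array contents, thread count, and helper names are illustrative.

```c
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static int data[N];

struct chunk { int *start; long count; long sum; };

/* Each thread sums its own slice of the array, touching no shared state. */
static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    c->sum = 0;
    for (long i = 0; i < c->count; i++)
        c->sum += c->start[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct chunk chunks[NTHREADS];

    for (long i = 0; i < N; i++)
        data[i] = i % 100;

    /* Divide: hand each thread an equal slice of the data set. */
    for (int t = 0; t < NTHREADS; t++) {
        chunks[t].start = data + t * (N / NTHREADS);
        chunks[t].count = N / NTHREADS;
        pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
    }

    /* Combine: wait for every thread, then add up the partial sums. */
    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += chunks[t].sum;
    }
    printf("total = %ld\n", total);
    return 0;
}
```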
Why Not Processes?
- IPC is more difficult because the kernel tries to protect processes from each other.
- Inside a single process, anything goes! (See the sketch below.)
- State (what?) associated with processes doesn’t scale well.
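A tiny sketch of that point using POSIX threads: a plain global variable is visible to every thread, so no pipes, sockets, or shared-memory segments are needed to pass data around (the flip side being the coordination problem mentioned above).

```c
#include <pthread.h>
#include <stdio.h>

static int shared = 0;   /* ordinary global: every thread in the process sees it */

static void *writer(void *arg)
{
    (void)arg;
    shared = 42;          /* no IPC machinery needed to hand this value over */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    pthread_join(t, NULL);              /* joining orders the write before the read */
    printf("shared = %d\n", shared);    /* prints 42 */
    return 0;
}
```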
Implementing Threads
- Threads implemented in user space: this is the M:1 threading model, M user threads that look like 1 thread to the operating system kernel.
- Threads implemented by the kernel: this is the 1:1 threading model, where each user thread is backed by its own kernel thread.
Implementing Threads in User Space
- Doesn’t involve multiplexing between processes so no kernel privilege required!
- Save and restore context? This is just saving and restoring registers. The C library has an implementation called setjmp()/longjmp() (see the sketch below).
- Preempt other threads? Use periodic signals delivered by the operating system to activate a user space thread scheduler.
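A minimal sketch of the setjmp()/longjmp() piece: setjmp() stashes the current register state in a jmp_buf, and a later longjmp() restores it, so control pops back to the saved point. (A real M:1 thread library also needs a separate stack per thread, which setjmp()/longjmp() alone does not give you; the function names here are illustrative.)

```c
#include <setjmp.h>
#include <stdio.h>

static jmp_buf checkpoint;   /* saved register state: a bare-bones "thread context" */

static void deep_in_some_work(void)
{
    printf("about to jump back\n");
    longjmp(checkpoint, 1);          /* restore the saved registers; execution
                                        resumes inside setjmp(), which now
                                        returns 1 instead of 0 */
}

int main(void)
{
    if (setjmp(checkpoint) == 0) {   /* first return: context was just saved */
        printf("context saved, doing work\n");
        deep_in_some_work();
    } else {                         /* we arrived here via longjmp() */
        printf("context restored\n");
    }
    return 0;
}
```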
Aside: setjmp()/longjmp() Wizardry
What will the following code do?
- Use these tricks to impress your (new) friends!
- (Or get rid of old ones…)
Meme ("unimpressed"): nailed the longjmp, forgot the setjmp.
Comparing Threading Implementations
M:1 (user space) threading:
- Threading operations are much faster because they do not have to cross the user/kernel boundary.
- Thread state can be smaller.
- Can’t use multiple cores!
- The operating system may not schedule the application correctly because it doesn’t know that it contains more than one thread.
- A single thread may block the entire process in the kernel when there are other threads that could run.
Comparing Threading Implementations
1:1 (kernel) threading:
- Scheduling might improve because the kernel can schedule all threads in the process.
- Context switch overhead for all threading operations.
Next: Thread Scheduling
- We have discussed the mechanism (context switching) used to multiplex the CPU…
- and the abstraction (threads).
- We will start talking about scheduling next week: the policies that ensure the multiplexing makes the best use of the available system resources.