Threads and Thread Implementations
Threads
- Each thread has its own registers and stack.
- All threads in a process share the process's memory and file descriptor table (see the sketch below).
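A minimal sketch of this split, assuming POSIX pthreads (the function and variable names are illustrative): both threads update the same global variable, while each keeps its own locals on its own private stack.

#include <pthread.h>
#include <stdio.h>

int shared = 0;                                     /* memory: visible to every thread */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* they must coordinate! */

static void *worker(void *arg) {
    int local = *(int *)arg;                        /* stack: private to this thread */
    pthread_mutex_lock(&lock);
    shared += local;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);                /* both updates landed in the same memory */
    return 0;
}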
Why Use Threads?
- Threads can be a good way of thinking about applications that do multiple things "simultaneously."
- Threads may naturally encapsulate some data about a certain thing that the application is doing.
- Threads may help applications hide or parallelize delays caused by slow devices (a small sketch follows this list).
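A minimal sketch of hiding a slow-device delay, assuming POSIX pthreads and using sleep() as a stand-in for the device: the main thread keeps computing while a second thread waits for the "read" to finish.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *slow_read(void *arg) {
    sleep(1);                            /* stand-in for a slow disk or network read */
    *(int *)arg = 42;                    /* the "data" arrives */
    return NULL;
}

int main(void) {
    pthread_t io;
    int data = 0;
    pthread_create(&io, NULL, slow_read, &data);

    long work = 0;
    for (int i = 0; i < 100000000; i++)  /* useful computation overlaps the wait */
        work += i;

    pthread_join(io, NULL);
    printf("computed %ld while reading %d\n", work, data);
    return 0;
}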
Threads v. Processes Part II
Good example from Wikipedia: multiple threads within a single process are like multiple cooks trying to prepare the same meal together.
- Each one is doing one thing.
- They are probably doing different things.
- They all share the same recipe but may be looking at different parts of it.
- They have private state but can communicate easily.
- They must coordinate!
Aside: Threads v. Events
- While threads are a reasonable way of thinking about concurrent programming, they are not the only way—or even always the best way—to make use of system resources.
- Another approach is known as event-driven programming.
- Anyone who has done JavaScript development or used frameworks like node.js has grown familiar with this programming model.
- Threads can block, so we make use of the CPU by switching between threads!
- Event handlers cannot block, so we can make use of the CPU by simply running events until completion (a toy event loop is sketched below).
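A toy run-to-completion event loop (all names here are illustrative, not taken from any real framework): handlers are queued and each runs until it returns; nothing inside a handler is allowed to block.

#include <stdio.h>

typedef void (*handler_t)(void *arg);

struct event { handler_t fn; void *arg; };

static struct event queue[16];
static int head = 0, tail = 0;

static void post(handler_t fn, void *arg) {         /* enqueue a pending event */
    queue[tail].fn = fn;
    queue[tail].arg = arg;
    tail = (tail + 1) % 16;
}

static void on_data(void *arg) {                     /* a handler: runs to completion */
    printf("handling data: %s\n", (const char *)arg);
}

int main(void) {
    post(on_data, "request 1");
    post(on_data, "request 2");
    while (head != tail) {                           /* run one event at a time */
        struct event e = queue[head];
        head = (head + 1) % 16;
        e.fn(e.arg);
    }
    return 0;
}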
Naturally Multithreaded Applications
- Servers: use a separate thread to handle each incoming request.
- Web browsers: separate threads for each open tab.
- When loading a page, separate threads to request and receive each unique part of the page.
- Scientific computing: divide-and-conquer "embarrassingly parallelizable" data sets (see the sketch below).
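A minimal divide-and-conquer sketch, assuming POSIX pthreads (the array and helper names are illustrative): two threads each sum half of an array, and the partial results are combined after both finish.

#include <pthread.h>
#include <stdio.h>

#define N 1000000

static int data[N];

struct range { int start, end; long sum; };

static void *sum_range(void *arg) {                  /* each thread sums its own slice */
    struct range *r = arg;
    r->sum = 0;
    for (int i = r->start; i < r->end; i++)
        r->sum += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1;

    pthread_t t1, t2;
    struct range lo = { 0, N / 2, 0 }, hi = { N / 2, N, 0 };

    pthread_create(&t1, NULL, sum_range, &lo);
    pthread_create(&t2, NULL, sum_range, &hi);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("total = %ld\n", lo.sum + hi.sum);        /* combine the partial sums */
    return 0;
}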
Why Not Processes?
- IPC is more difficult because the kernel tries to protect processes from each other.
- Inside a single process, anything goes!
- State associated with processes (address spaces, page tables, and other per-process kernel structures) doesn't scale well.
Implementing Threads
- Threads can be implemented entirely in user space, invisible to the kernel: this is the M:1 threading model, M user threads that look like 1 thread to the operating system kernel.
- Threads can also be implemented by the kernel itself: this is the 1:1 threading model, where each user thread is backed by its own kernel thread.
Implementing Threads in User Space
- Doesn't involve multiplexing between processes, so no kernel privilege required!
- How do we save and restore context? (One approach is sketched below.)
- How do we preempt other threads? (Typically with a periodic timer signal such as SIGALRM.)
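One way to save and restore context without kernel help is the POSIX ucontext API; the following is a minimal sketch (the stack size and function names are illustrative) in which two contexts hand control back and forth while the kernel still sees a single thread.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[64 * 1024];

static void thread_fn(void) {
    printf("user thread: running\n");
    swapcontext(&thread_ctx, &main_ctx);        /* save our registers, resume main */
    printf("user thread: resumed\n");
}

int main(void) {
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp = thread_stack;   /* each user thread needs its own stack */
    thread_ctx.uc_stack.ss_size = sizeof(thread_stack);
    thread_ctx.uc_link = &main_ctx;             /* where to go when thread_fn returns */
    makecontext(&thread_ctx, thread_fn, 0);

    swapcontext(&main_ctx, &thread_ctx);        /* switch to the user thread */
    printf("main: thread yielded\n");
    swapcontext(&main_ctx, &thread_ctx);        /* let it finish */
    printf("main: thread finished\n");
    return 0;
}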
Aside: setjmp()/longjmp() Wizardry
What will the following code do?
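A representative sketch of this kind of trick (not the original listing): setjmp() saves the current execution context, and a later longjmp() makes the setjmp() call appear to return a second time, this time with a nonzero value.

#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void jump_back(void) {
    printf("in jump_back, about to longjmp\n");
    longjmp(env, 42);                 /* control reappears at setjmp() below */
    printf("never printed\n");        /* unreachable */
}

int main(void) {
    int ret = setjmp(env);            /* returns 0 on the direct call... */
    if (ret == 0) {
        printf("first return from setjmp\n");
        jump_back();
    } else {
        /* ...and returns longjmp's value (42) when we jump back */
        printf("second return from setjmp: %d\n", ret);
    }
    return 0;
}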
- Use these tricks to impress your (new) friends!
- (Or get rid of old ones…)
Comparing Threading Implementations
M:1 (user space) threading
- Threading operations are much faster because they do not have to cross the user/kernel boundary.
- Thread state can be smaller.
- Can't use multiple cores!
- The operating system may not schedule the application correctly because it doesn't know that the process contains more than one thread.
- A single thread may block the entire process in the kernel when there are other threads that could run.
1:1 (kernel) threading
- Scheduling might improve because the kernel can schedule all of a process's threads.
- Context-switch overhead for all threading operations.