Diffstat (limited to 'pintos-progos/notes/2.txt')

 -rw-r--r--  pintos-progos/notes/2.txt  164
 1 files changed, 164 insertions(+), 0 deletions(-)
diff --git a/pintos-progos/notes/2.txt b/pintos-progos/notes/2.txt
new file mode 100644
index 0000000..c81b980
--- /dev/null
+++ b/pintos-progos/notes/2.txt
@@ -0,0 +1,164 @@
Project 1 - Threads
===================

Alarm Clock
-----------
The simplest strategy is to maintain a wait list for
all threads that are blocked for sleeping.

 * In 'timer_interrupt', check for threads whose sleep timeout
   has expired and unblock them
 * In 'sleep', set the sleep timeout in the thread, put it on the
   sleep list and then block it (a sketch follows below)

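A minimal sketch of the 'sleep' side (the 'wakeup_tick' field and the
'sleep_list' are hypothetical names, not taken from the actual patch);
the interrupt side then walks the list with the removal loop shown in
the notes below:

    /* Sketch for devices/timer.c, not the actual patch.  Assumes a new
       int64_t 'wakeup_tick' field in struct thread; list_init of
       sleep_list goes into timer_init (). */
    static struct list sleep_list;     /* threads blocked in timer_sleep () */

    void
    timer_sleep (int64_t ticks)
    {
      struct thread *cur = thread_current ();
      enum intr_level old_level;

      cur->wakeup_tick = timer_ticks () + ticks;

      old_level = intr_disable ();
      list_push_back (&sleep_list, &cur->elem);  /* enqueue first ...        */
      thread_block ();                           /* ... then give up the CPU */
      intr_set_level (old_level);
    }
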
Notes:

 * There are three places where a thread is added to the
   ready list:
   - thread_init
   - thread_yield
   - thread_unblock
 * Iterating over a list while removing elements (list_remove returns
   the element that followed the removed one):

     for (e = list_begin (&list); e != list_end (&list); )
       if (...)
         {
           struct thread *t = list_entry (e, struct thread, elem);
           e = list_remove (e);  /* returns the following element */
           thread_unblock (t);   /* unblock AFTER removing, as thread.elem is reused */
         }
       else
         e = list_next (e);

Stats:

 pintos/src/devices/timer.c  | 40 ++++++++++++++++++++++++++++++++++++++--
 pintos/src/threads/thread.h |  3 +++
 2 files changed, 41 insertions(+), 2 deletions(-)

 Design & Implementation time: 4 hours
Priority Scheduler
------------------

A simple implementation of the priority scheduler (64 priority levels, round robin
within one priority group; a sketch of one possible ready-queue layout follows
after the list):

 * If a new task arrives with a higher priority, switch to that priority group
 * If the currently active group becomes empty, search for the group with the
   next highest priority

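One possible layout that matches this description, sketched below; the names
(ready_queues, highest_ready, thread_ready) and the exact placement are
assumptions, not necessarily what the patch does:

    /* Sketch for threads/thread.c, not the actual patch.  One ready queue
       per priority level plus a cached top priority; list_init of every
       queue goes into thread_init (). */
    static struct list ready_queues[PRI_MAX + 1];
    static int highest_ready = PRI_MIN - 1;   /* below PRI_MIN: nothing ready */

    /* Hypothetical helper, per the notes called from
       thread_{init,unblock,yield}. */
    static void
    thread_ready (struct thread *t)
    {
      ASSERT (intr_get_level () == INTR_OFF);
      list_push_back (&ready_queues[t->priority], &t->elem);
      if (t->priority > highest_ready)
        highest_ready = t->priority;
    }

    static struct thread *
    next_thread_to_run (void)
    {
      /* Skip empty groups until a non-empty one is found. */
      while (highest_ready >= PRI_MIN
             && list_empty (&ready_queues[highest_ready]))
        highest_ready--;

      if (highest_ready < PRI_MIN)
        return idle_thread;
      return list_entry (list_pop_front (&ready_queues[highest_ready]),
                         struct thread, elem);
    }
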
Notes:

 * thread_{init,unblock,yield} now call thread_ready, which updates the lowest
   ready priority.
 * The thread_unblock operation does not switch to the newly unblocked thread
   immediately. Therefore, we need to check later whether we need to switch to a
   higher-priority thread (via thread_yield). As thread_unblock is called with
   interrupts off, it seemed best to perform this check when interrupts are
   re-enabled. This is only necessary if a higher-priority task is ready.
   (A commonly used alternative is sketched after this list.)
 * The first attempt passed alarm-priority but failed the priority-preempt test.
   The debugging facilities are fantastic, though, so it was easy to spot the
   problem.
 * Wolfgang suggested yielding via a software interrupt when unblocking instead
   of modifying interrupt_enable.

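For reference, a sketch of the commonly used alternative mentioned above: check
for preemption right where a thread becomes ready (e.g. at the end of sema_up or
thread_create) instead of in interrupt_enable. intr_context() and
intr_yield_on_return() exist in Pintos; maybe_preempt is a hypothetical helper,
not part of the actual patch:

    /* Hand over the CPU if a newly readied thread outranks the running one. */
    static void
    maybe_preempt (struct thread *t)
    {
      if (t->priority > thread_current ()->priority)
        {
          if (intr_context ())
            intr_yield_on_return ();  /* yield once the interrupt handler returns */
          else
            thread_yield ();          /* safe to yield immediately */
        }
    }
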
Stats:

 pintos/src/threads/interrupt.c |  3 +-
 pintos/src/threads/thread.c    | 60 ++++++++++++++++++++++++++++++++--------
 pintos/src/threads/thread.h    |  1 +
 3 files changed, 51 insertions(+), 13 deletions(-)

 Design and implementation time: 3 hours

Priority Locks
--------------

We also need to wake the highest-priority task first when releasing locks, semaphores
and condition variables. The easiest implementation searches the wait queue for the
thread with the highest priority.

Notes:

 * It is sufficient to implement the priority-based selection twice, for sema_up and
   cond_signal. cond_signal is a little bit harder, as we need to store the priority
   (or the waiting thread) in the semaphore_elem type.
 * There are some handy list utility functions; in this case, list_max does a fine job
   for both sema_up and cond_signal (see the sketch after this list).
 * It is difficult to implement this in an efficient (sublinear) way, because priority
   donation may boost a waiting thread at any time!

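A minimal sketch of the sema_up side using list_max; thread_priority_less is a
hypothetical helper name, the rest follows the structure of the stock sema_up:

    /* Sketch for threads/synch.c, not the actual patch. */
    static bool
    thread_priority_less (const struct list_elem *a,
                          const struct list_elem *b, void *aux UNUSED)
    {
      return list_entry (a, struct thread, elem)->priority
             < list_entry (b, struct thread, elem)->priority;
    }

    void
    sema_up (struct semaphore *sema)
    {
      enum intr_level old_level;

      ASSERT (sema != NULL);

      old_level = intr_disable ();
      if (!list_empty (&sema->waiters))
        {
          /* Wake the highest-priority waiter instead of the list front. */
          struct list_elem *max = list_max (&sema->waiters,
                                            thread_priority_less, NULL);
          list_remove (max);
          thread_unblock (list_entry (max, struct thread, elem));
        }
      sema->value++;
      intr_set_level (old_level);
    }
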
Stats:

 pintos/src/threads/synch.c | 40 ++++++++++++++++++++++++++++++++++------
 1 files changed, 34 insertions(+), 6 deletions(-)

 Design and Implementation time: 1 hour

Priority Donation
-----------------
If a thread tries to acquire a lock that is already held, the lock holder needs to be
boosted to the donated priority. We need to deal with nesting and chaining:

 * Lock/Thread correspondence: Each lock is associated with at most one thread that
   holds it. Therefore, donated priority can be associated with a lock.
 * If a thread T wants to obtain a lock L and a thread with a lower priority holds it,
   the thread holding the lock is boosted to the priority of the requesting thread.
 * Chaining: If the boosted thread is itself blocked on a lock, then we also need to
   donate the priority to that lock, in a transitive way.
 * Nesting: A thread may hold more than one lock, so we need to keep track of the
   donation to each lock. When a lock is released or the static priority changes, the
   highest priority donated through the remaining locks is assigned to the thread.

With this information, the following rules seem suitable (without proof of correctness):

 * If thread T tries to acquire lock L

   ==> if (L.holder)
         T.locked_on = L
         donate_priority (L, T.priority)
       end

       donate_priority (L, p) :=
         L.priority v= p                       /* v= : raise to p if p is larger */
         L.holder.priority v= p
         if (L.holder.locked_on)               /* chaining: follow the holder's own wait */
           donate_priority (L.holder.locked_on, p)
       end

 * If a thread T acquires a lock L (the lock became available)

   ==> L.holder = T
       T.locks += L
       T.locked_on = none

 * If a thread T releases a lock L

   ==> L.holder = none
       T.locks -= L
       T.priority = max (T.locks.priority, T.static_priority)

To implement this, each thread needs to maintain a list of the locks it holds, a static
priority, and a reference to the lock blocking it (a data-structure sketch follows below).

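A minimal sketch of the data involved; the field names (donated_priority,
static_priority, locks, locked_on) are hypothetical, the actual patch may name
and place things differently:

    /* Sketch for threads/synch.h: struct lock with the proposed additions. */
    struct lock
      {
        struct thread *holder;       /* Thread holding lock (as before). */
        struct semaphore semaphore;  /* Binary semaphore controlling access (as before). */
        int donated_priority;        /* New: highest priority donated through this lock. */
        struct list_elem elem;       /* New: element in the holder's 'locks' list. */
      };

    /* Sketch for threads/thread.h: fields added to struct thread. */
    int static_priority;             /* Base priority set via thread_set_priority (). */
    struct list locks;               /* Locks currently held by this thread. */
    struct lock *locked_on;          /* Lock this thread is blocked on, or NULL. */
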
Notes:

 * Difficult to design, really a challenge.
   Literature (a short paper) could be helpful.
   Maybe it would be interesting to ask for correctness proof sketches?

 * The first design was wrong, because I assumed locks are acquired in FIFO fashion.
 * The first implementation failed to pass the donate tests, due to a typo. The
   debugging facilities are fantastic, so it was no problem to spot this one.
 * Next try, priority-donate-lower failed. Looking at the source code revealed that
   we need to recompute the priority at the end of thread_set_priority (a sketch
   follows after this list).
 * Next try, priority-donate-chain failed. Chaining is tricky to get right;
   in my implementation, chained donations were lost after one lock was released.

 * It would be an interesting option to ask the students for new test cases.
 * I think it would also be a cool task to write a test for an RMS scheduling
   scenario with blocking.

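A minimal sketch of that recomputation, reusing the hypothetical field names
from the data-structure sketch above (not the actual patch):

    /* Sketch for threads/thread.c: setting the static priority must also
       re-derive the effective priority from the donations still held. */
    void
    thread_set_priority (int new_priority)
    {
      struct thread *cur = thread_current ();
      struct list_elem *e;
      enum intr_level old_level;
      int effective = new_priority;

      old_level = intr_disable ();
      cur->static_priority = new_priority;

      /* The effective priority is the maximum of the new static priority
         and every priority still donated through a held lock. */
      for (e = list_begin (&cur->locks); e != list_end (&cur->locks);
           e = list_next (e))
        {
          struct lock *l = list_entry (e, struct lock, elem);
          if (l->donated_priority > effective)
            effective = l->donated_priority;
        }
      cur->priority = effective;
      intr_set_level (old_level);

      thread_yield ();   /* lowering the priority may require giving up the CPU */
    }
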
Stats:

 pintos/src/threads/synch.c  | 15 ++++++++++++
 pintos/src/threads/synch.h  |  6 +++-
 pintos/src/threads/thread.c | 53 +++++++++++++++++++++++++++++++++++++++++-
 pintos/src/threads/thread.h |  9 ++++++-
 pintos/src/utils/pintos     |  2 +-

 Design: 5h
 Implementation: 3h