=============
CFS Scheduler
=============


1. OVERVIEW

CFS stands for "Completely Fair Scheduler," and is the new "desktop" process
scheduler implemented by Ingo Molnar and merged in Linux 2.6.23. It is the
replacement for the previous vanilla scheduler's SCHED_OTHER interactivity
code.

80% of CFS's design can be summed up in a single sentence: CFS basically models
an "ideal, precise multi-tasking CPU" on real hardware.

16 | "Ideal multi-tasking CPU" is a (non-existent :-)) CPU that has 100% physical |
17 | power and which can run each task at precise equal speed, in parallel, each at |
18 | 1/nr_running speed. For example: if there are 2 tasks running, then it runs |
19 | each at 50% physical power --- i.e., actually in parallel. |
20 | |
On real hardware, we can run only a single task at once, so we have to
introduce the concept of "virtual runtime." The virtual runtime of a task
specifies when its next timeslice would start execution on the ideal
multi-tasking CPU described above. In practice, the virtual runtime of a task
is its actual runtime normalized to the total number of running tasks.



2. FEW IMPLEMENTATION DETAILS

In CFS the virtual runtime is expressed and tracked via the per-task
p->se.vruntime (nanosec-unit) value. This way, it's possible to accurately
timestamp and measure the "expected CPU time" a task should have gotten.
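
As a rough illustration of this accounting (a minimal sketch, not the kernel's
actual code: every name except p->se.vruntime is invented, and the nice-0
weight of 1024 is an assumed value), the per-task update could look like this:

  /* Illustrative sketch only -- not the kernel's implementation. */
  struct entity {
          unsigned long long vruntime;  /* nanoseconds, like p->se.vruntime */
          unsigned long weight;         /* load weight derived from the nice level */
  };

  #define NICE_0_WEIGHT 1024            /* assumed weight of a nice-0 task */

  /* Charge the time a task just spent on the CPU to its virtual runtime,
   * scaled so that heavier (higher-priority) tasks accumulate vruntime
   * more slowly than lighter ones.  This scaling is also how nice levels
   * enter the picture. */
  static void account_vruntime(struct entity *se, unsigned long long delta_exec_ns)
  {
          se->vruntime += delta_exec_ns * NICE_0_WEIGHT / se->weight;
  }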

[ small detail: on "ideal" hardware, at any time all tasks would have the same
p->se.vruntime value --- i.e., tasks would execute simultaneously and no task
would ever get "out of balance" from the "ideal" share of CPU time. ]

CFS's task picking logic is based on this p->se.vruntime value and it is thus
very simple: it always tries to run the task with the smallest p->se.vruntime
value (i.e., the task which executed least so far). CFS always tries to split
up CPU time between runnable tasks as close to "ideal multitasking hardware" as
possible.

Most of the rest of CFS's design just falls out of this really simple concept,
with a few add-on embellishments like nice levels, multiprocessing and various
algorithm variants to recognize sleepers.



3. THE RBTREE

CFS's design is quite radical: it does not use the old data structures for the
runqueues, but it uses a time-ordered rbtree to build a "timeline" of future
task execution, and thus has no "array switch" artifacts (by which both the
previous vanilla scheduler and RSDL/SD are affected).

CFS also maintains the rq->cfs.min_vruntime value, which is a monotonically
increasing value tracking the smallest vruntime among all tasks in the
runqueue. The total amount of work done by the system is tracked using
min_vruntime; that value is used to place newly activated entities on the left
side of the tree as much as possible.
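
As a rough sketch of the two roles described above (all names here are
invented; this is not the kernel's code), min_vruntime could be maintained and
used like this:

  /* Illustrative sketch only. */
  struct fair_queue {
          unsigned long long min_vruntime;  /* monotonically increasing */
  };

  /* Called whenever the smallest vruntime in the tree may have changed. */
  static void track_min_vruntime(struct fair_queue *q, unsigned long long leftmost)
  {
          if (leftmost > q->min_vruntime)
                  q->min_vruntime = leftmost;  /* never moves backwards */
  }

  /* A newly activated entity starts near the current minimum, which places
   * it toward the left of the tree without letting it starve tasks that
   * have already been waiting. */
  static unsigned long long initial_vruntime(const struct fair_queue *q)
  {
          return q->min_vruntime;
  }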

The total number of running tasks in the runqueue is accounted through the
rq->cfs.load value, which is the sum of the weights of the tasks queued on the
runqueue.

CFS maintains a time-ordered rbtree, where all runnable tasks are sorted by the
p->se.vruntime key (there is a subtraction using rq->cfs.min_vruntime to
account for possible wraparounds). CFS picks the "leftmost" task from this
tree and sticks to it.
As the system progresses forwards, the executed tasks are put into the tree
more and more to the right --- slowly but surely giving a chance for every task
to become the "leftmost task" and thus get on the CPU within a deterministic
amount of time.
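
A minimal sketch of that pick (the struct and field names below are invented;
only rb_first() and rb_entry() from the kernel's <linux/rbtree.h> are real API)
might look like:

  #include <linux/rbtree.h>

  /* Illustrative sketch only. */
  struct fair_entity {
          struct rb_node node;              /* link into the timeline tree */
          unsigned long long vruntime;
  };

  /* Ordering key: vruntime relative to min_vruntime, so the comparison
   * keeps working even if the absolute nanosecond counters wrap. */
  static inline long long entity_key(unsigned long long min_vruntime,
                                     const struct fair_entity *se)
  {
          return (long long)(se->vruntime - min_vruntime);
  }

  /* Pick the task that has executed least so far: the leftmost node. */
  static struct fair_entity *pick_leftmost(struct rb_root *timeline)
  {
          struct rb_node *left = rb_first(timeline);

          return left ? rb_entry(left, struct fair_entity, node) : NULL;
  }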

Summing up, CFS works like this: it runs a task a bit, and when the task
schedules (or a scheduler tick happens) the task's CPU usage is "accounted
for": the (small) time it just spent using the physical CPU is added to
p->se.vruntime. Once p->se.vruntime gets high enough so that another task
becomes the "leftmost task" of the time-ordered rbtree it maintains (plus a
small amount of "granularity" distance relative to the leftmost task so that we
do not over-schedule tasks and thrash the cache), then the new leftmost task is
picked and the current task is preempted.
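
Reusing the fair_entity sketch above, the preemption decision (with its
"granularity" cushion) could be expressed as follows; again, this is only an
illustration, not the kernel's code:

  /* Preempt only once the running task has fallen behind the leftmost
   * task by more than the granularity, so that we do not switch so often
   * that we thrash the cache. */
  static int should_preempt(const struct fair_entity *curr,
                            const struct fair_entity *leftmost,
                            unsigned long long granularity_ns)
  {
          return curr->vruntime > leftmost->vruntime + granularity_ns;
  }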



4. SOME FEATURES OF CFS

CFS uses nanosecond granularity accounting and does not rely on any jiffies or
other HZ detail. Thus the CFS scheduler has no notion of "timeslices" in the
way the previous scheduler had, and has no heuristics whatsoever. There is
only one central tunable (you have to switch on CONFIG_SCHED_DEBUG):

  /proc/sys/kernel/sched_min_granularity_ns

which can be used to tune the scheduler from "desktop" (i.e., low latencies) to
"server" (i.e., good batching) workloads. It defaults to a setting suitable
for desktop workloads. SCHED_BATCH is handled by the CFS scheduler module too.
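
For example, with CONFIG_SCHED_DEBUG enabled the tunable can be inspected and
raised at run time to favour batching (the values shown here are only
illustrative; the actual default varies):

  # cat /proc/sys/kernel/sched_min_granularity_ns
  4000000
  # echo 10000000 > /proc/sys/kernel/sched_min_granularity_ns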

Due to its design, the CFS scheduler is not prone to any of the "attacks" that
exist today against the heuristics of the stock scheduler: fiftyp.c, thud.c,
chew.c, ring-test.c, massive_intr.c all work fine and do not impact
interactivity and produce the expected behavior.

The CFS scheduler has a much stronger handling of nice levels and SCHED_BATCH
than the previous vanilla scheduler: both types of workloads are isolated much
more aggressively.

SMP load-balancing has been reworked/sanitized: the runqueue-walking
assumptions are gone from the load-balancing code now, and iterators of the
scheduling modules are used. The balancing code got quite a bit simpler as a
result.



5. Scheduling policies

CFS implements three scheduling policies:

 - SCHED_NORMAL (traditionally called SCHED_OTHER): The scheduling
   policy that is used for regular tasks.

 - SCHED_BATCH: Does not preempt nearly as often as regular tasks
   would, thereby allowing tasks to run longer and make better use of
   caches but at the cost of interactivity. This is well suited for
   batch jobs.

 - SCHED_IDLE: This is even weaker than nice 19, but it is not a true
   idle timer scheduler, in order to avoid getting into priority
   inversion problems which would deadlock the machine.

SCHED_FIFO/_RR are implemented in sched_rt.c and are as specified by
POSIX.

The command chrt from util-linux-ng 2.13.1.1 can set all of these except
SCHED_IDLE.
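
For example (the PIDs and priorities below are arbitrary), chrt can launch a
task under a given policy or change the policy of a running one:

  # chrt -b 0 make -j4        # run "make" under SCHED_BATCH
  # chrt -f 50 ./my_rt_app    # run with SCHED_FIFO, priority 50
  # chrt -r -p 30 1234        # switch PID 1234 to SCHED_RR, priority 30
  # chrt -p 1234              # show PID 1234's current policy and priority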



6. SCHEDULING CLASSES

The new CFS scheduler has been designed in such a way as to introduce
"Scheduling Classes," an extensible hierarchy of scheduler modules. These
modules encapsulate scheduling policy details and are handled by the scheduler
core without the core code assuming too much about them.

sched_fair.c implements the CFS scheduler described above.

sched_rt.c implements SCHED_FIFO and SCHED_RR semantics, in a simpler way than
the previous vanilla scheduler did. It uses 100 runqueues (for all 100 RT
priority levels, instead of 140 in the previous scheduler) and it needs no
expired array.

Scheduling classes are implemented through the sched_class structure, which
contains hooks to functions that must be called whenever an interesting event
occurs.

This is the (partial) list of the hooks (a simplified sketch of such a
structure follows the list):

 - enqueue_task(...)

   Called when a task enters a runnable state.
   It puts the scheduling entity (task) into the red-black tree and
   increments the nr_running variable.

 - dequeue_task(...)

   When a task is no longer runnable, this function is called to remove the
   corresponding scheduling entity from the red-black tree. It decrements
   the nr_running variable.

 - yield_task(...)

   This function is basically just a dequeue followed by an enqueue, unless the
   compat_yield sysctl is turned on; in that case, it places the scheduling
   entity at the right-most end of the red-black tree.

 - check_preempt_curr(...)

   This function checks if a task that entered the runnable state should
   preempt the currently running task.

 - pick_next_task(...)

   This function chooses the most appropriate task eligible to run next.

 - set_curr_task(...)

   This function is called when a task changes its scheduling class or changes
   its task group.

 - task_tick(...)

   This function is mostly called from time tick functions; it might lead to a
   process switch. This drives the running preemption.

 - task_new(...)

   The core scheduler gives the scheduling module an opportunity to manage new
   task startup. The CFS scheduling module uses it for group scheduling, while
   the scheduling module for a real-time task does not use it.
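
Pulling these hooks together, a much-simplified sketch of such a class
descriptor could look like the structure below. This is only an illustration:
the real struct sched_class in the kernel has additional members and different
function signatures.

  /* Simplified, illustrative sketch -- not the kernel's definition. */
  struct sched_class_sketch {
          void (*enqueue_task)(struct rq *rq, struct task_struct *p);
          void (*dequeue_task)(struct rq *rq, struct task_struct *p);
          void (*yield_task)(struct rq *rq);
          void (*check_preempt_curr)(struct rq *rq, struct task_struct *p);
          struct task_struct *(*pick_next_task)(struct rq *rq);
          void (*set_curr_task)(struct rq *rq);
          void (*task_tick)(struct rq *rq, struct task_struct *p);
          void (*task_new)(struct rq *rq, struct task_struct *p);
  };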



7. GROUP SCHEDULER EXTENSIONS TO CFS

Normally, the scheduler operates on individual tasks and strives to provide
fair CPU time to each task. Sometimes, it may be desirable to group tasks and
provide fair CPU time to each such task group. For example, it may be
desirable to first provide fair CPU time to each user on the system and then to
each task belonging to a user.

CONFIG_GROUP_SCHED strives to achieve exactly that. It lets tasks be grouped
and divides CPU time fairly among such groups.

CONFIG_RT_GROUP_SCHED permits grouping of real-time (i.e., SCHED_FIFO and
SCHED_RR) tasks.

CONFIG_FAIR_GROUP_SCHED permits grouping of CFS (i.e., SCHED_NORMAL and
SCHED_BATCH) tasks.

At present, there are two (mutually exclusive) mechanisms to group tasks for
CPU bandwidth control purposes:

 - Based on user id (CONFIG_USER_SCHED)

   With this option, tasks are grouped according to their user id.

 - Based on "cgroup" pseudo filesystem (CONFIG_CGROUP_SCHED)

   This option needs CONFIG_CGROUPS to be defined, and lets the administrator
   create arbitrary groups of tasks, using the "cgroup" pseudo filesystem. See
   Documentation/cgroups/cgroups.txt for more information about this filesystem.

Only one of these options to group tasks can be chosen, not both.

When CONFIG_USER_SCHED is defined, a directory is created in sysfs for each new
user and a "cpu_share" file is added in that directory.

  # cd /sys/kernel/uids
  # cat 512/cpu_share            # Display user 512's CPU share
  1024
  # echo 2048 > 512/cpu_share    # Modify user 512's CPU share
  # cat 512/cpu_share            # Display user 512's CPU share
  2048
  #

CPU bandwidth between two users is divided in the ratio of their CPU shares.
For example: if you would like user "root" to get twice the bandwidth of user
"guest," then set the cpu_share for both the users such that "root"'s cpu_share
is twice "guest"'s cpu_share.
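
For instance, assuming "root" is uid 0 and "guest" has uid 1005 (the guest uid
here is just an example), that 2:1 split could be configured as:

  # echo 2048 > /sys/kernel/uids/0/cpu_share
  # echo 1024 > /sys/kernel/uids/1005/cpu_share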

When CONFIG_CGROUP_SCHED is defined, a "cpu.shares" file is created for each
group created using the pseudo filesystem. See example steps below to create
task groups and modify their CPU share using the "cgroups" pseudo filesystem.

  # mkdir /dev/cpuctl
  # mount -t cgroup -ocpu none /dev/cpuctl
  # cd /dev/cpuctl

  # mkdir multimedia    # create "multimedia" group of tasks
  # mkdir browser       # create "browser" group of tasks

  # #Configure the multimedia group to receive twice the CPU bandwidth
  # #of the browser group

  # echo 2048 > multimedia/cpu.shares
  # echo 1024 > browser/cpu.shares

  # firefox &                   # Launch firefox and move it to "browser" group
  # echo <firefox_pid> > browser/tasks

  # #Launch gmplayer (or your favourite movie player)
  # echo <movie_player_pid> > multimedia/tasks

8. Implementation note: user namespaces

User namespaces are intended to be hierarchical. But they are currently
only partially implemented. Each of these facts has ramifications for CFS.

First, since user namespaces are hierarchical, the /sys/kernel/uids
presentation is inadequate. Eventually we will likely want to use sysfs
tagging to provide private views of /sys/kernel/uids within each user
namespace.

Second, the hierarchical nature is intended to support completely
unprivileged use of user namespaces. So, if user grouping is in use, we want
the users in a user namespace to be children of the user who created it.

That is currently unimplemented. So instead, every user in a new
user namespace will receive 1024 shares just like any user in the
initial user namespace. Note that at the moment creation of a new
user namespace requires each of CAP_SYS_ADMIN, CAP_SETUID, and
CAP_SETGID.