Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

1.      What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
2.      What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
3.      RCU part 3: the RCU API      http://lwn.net/Articles/264090/


What is RCU?

RCU is a synchronization mechanism, added to the Linux kernel during
the 2.5 development effort, that is optimized for read-mostly
situations. Although RCU is actually quite simple once you understand it,
getting there can sometimes be a challenge. Part of the problem is that
most of the past descriptions of RCU have been written with the mistaken
assumption that there is "one true way" to describe RCU. Instead,
the experience has been that different people must take different paths
to arrive at an understanding of RCU. This document provides several
different paths, as follows:

1.      RCU OVERVIEW
2.      WHAT IS RCU'S CORE API?
3.      WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.      WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.      WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.      ANALOGY WITH READER-WRITER LOCKING
7.      FULL LIST OF RCU APIs
8.      ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point. People who prefer to start with an API that they can then
experiment with should focus on Section 2. People who prefer to start
with example uses should focus on Sections 3 and 4. People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code. People who reason best by analogy should
focus on Section 6. Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning. If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway. ;-)


1.  RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases. The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference. The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase. Because reclaiming data items can disrupt any
readers concurrently referencing those data items, the reclamation phase
must not start until readers no longer hold references to those data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish. Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.      Remove pointers to a data structure, so that subsequent
        readers cannot gain a reference to it.

b.      Wait for all previous readers to complete their RCU read-side
        critical sections.

c.      At this point, there cannot be any readers who hold references
        to the data structure, so it now may safely be reclaimed
        (e.g., kfree()d).

Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all. In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers. In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers. Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache). Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.


2.  WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.      rcu_read_lock()
b.      rcu_read_unlock()
c.      synchronize_rcu() / call_rcu()
d.      rcu_assign_pointer()
e.      rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the other 18 will be
enumerated later. See the kernel docbook documentation for more info,
or look directly at the function header comments.

rcu_read_lock()

        void rcu_read_lock(void);

        Used by a reader to inform the reclaimer that the reader is
        entering an RCU read-side critical section. It is illegal
        to block while in an RCU read-side critical section, though
        kernels built with CONFIG_TREE_PREEMPT_RCU can preempt RCU
        read-side critical sections. Any RCU-protected data structure
        accessed during an RCU read-side critical section is guaranteed to
        remain unreclaimed for the full duration of that critical section.
        Reference counts may be used in conjunction with RCU to maintain
        longer-term references to data structures.

rcu_read_unlock()

        void rcu_read_unlock(void);

        Used by a reader to inform the reclaimer that the reader is
        exiting an RCU read-side critical section. Note that RCU
        read-side critical sections may be nested and/or overlapping.
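
        As a sketch, a nested reader might look as follows, assuming a
        hypothetical RCU-protected global pointer gp and a hypothetical
        do_something_with() function:

                rcu_read_lock();        /* outer read-side critical section */
                rcu_read_lock();        /* nesting is legal */
                do_something_with(rcu_dereference(gp));
                rcu_read_unlock();      /* ends only the inner section */
                rcu_read_unlock();      /* reader is now done */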

synchronize_rcu()

        void synchronize_rcu(void);

        Marks the end of updater code and the beginning of reclaimer
        code. It does this by blocking until all pre-existing RCU
        read-side critical sections on all CPUs have completed.
        Note that synchronize_rcu() will -not- necessarily wait for
        any subsequent RCU read-side critical sections to complete.
        For example, consider the following sequence of events:

                 CPU 0                 CPU 1                    CPU 2
             -----------------   -------------------------   -----------------
         1.  rcu_read_lock()
         2.                      enters synchronize_rcu()
         3.                                                   rcu_read_lock()
         4.  rcu_read_unlock()
         5.                      exits synchronize_rcu()
         6.                                                   rcu_read_unlock()

        To reiterate, synchronize_rcu() waits only for ongoing RCU
        read-side critical sections to complete, not necessarily for
        any that begin after synchronize_rcu() is invoked.

        Of course, synchronize_rcu() does not necessarily return
        -immediately- after the last pre-existing RCU read-side critical
        section completes. For one thing, there might well be scheduling
        delays. For another thing, many RCU implementations process
        requests in batches in order to improve efficiencies, which can
        further delay synchronize_rcu().

        Since synchronize_rcu() is the API that must figure out when
        readers are done, its implementation is key to RCU. For RCU
        to be useful in all but the most read-intensive situations,
        synchronize_rcu()'s overhead must also be quite small.

        The call_rcu() API is a callback form of synchronize_rcu(),
        and is described in more detail in a later section. Instead of
        blocking, it registers a function and argument which are invoked
        after all ongoing RCU read-side critical sections have completed.
        This callback variant is particularly useful in situations where
        it is illegal to block or where update-side performance is
        critically important.

        However, the call_rcu() API should not be used lightly, as use
        of the synchronize_rcu() API generally results in simpler code.
        In addition, the synchronize_rcu() API has the nice property
        of automatically limiting update rate should grace periods
        be delayed. This property results in system resilience in the
        face of denial-of-service attacks. Code using call_rcu() should
        limit update rate in order to gain this same sort of resilience.
        See checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()

        typeof(p) rcu_assign_pointer(p, typeof(p) v);

        Yes, rcu_assign_pointer() -is- implemented as a macro, though it
        would be cool to be able to declare a function in this manner.
        (Compiler experts will no doubt disagree.)

        The updater uses this function to assign a new value to an
        RCU-protected pointer, in order to safely communicate the change
        in value from the updater to the reader. This function returns
        the new value, and also executes any memory-barrier instructions
        required for a given CPU architecture.

        Perhaps just as important, it serves to document (1) which
        pointers are protected by RCU and (2) the point at which a
        given structure becomes accessible to other CPUs. That said,
        rcu_assign_pointer() is most frequently used indirectly, via
        the _rcu list-manipulation primitives such as list_add_rcu().
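
        For example, a newly allocated structure might be published as
        follows. (A sketch only: gp is a hypothetical RCU-protected
        global pointer, and struct foo is the example structure used
        in Section 3 below. The point is that all initialization must
        precede the rcu_assign_pointer() call.)

                struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

                if (p) {
                        p->a = 1;                       /* initialize first... */
                        rcu_assign_pointer(gp, p);      /* ...then publish */
                }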

rcu_dereference()

        typeof(p) rcu_dereference(p);

        Like rcu_assign_pointer(), rcu_dereference() must be implemented
        as a macro.

        The reader uses rcu_dereference() to fetch an RCU-protected
        pointer, which returns a value that may then be safely
        dereferenced. Note that rcu_dereference() does not actually
        dereference the pointer; instead, it protects the pointer for
        later dereferencing. It also executes any needed memory-barrier
        instructions for a given CPU architecture. Currently, only Alpha
        needs memory barriers within rcu_dereference() -- on other CPUs,
        it compiles to nothing, not even a compiler directive.

        Common coding practice uses rcu_dereference() to copy an
        RCU-protected pointer to a local variable, then dereferences
        this local variable, for example as follows:

                p = rcu_dereference(head.next);
                return p->data;

        However, in this case, one could just as easily combine these
        into one statement:

                return rcu_dereference(head.next)->data;

        If you are going to be fetching multiple fields from the
        RCU-protected structure, using the local variable is of
        course preferred. Repeated rcu_dereference() calls look
        ugly and incur unnecessary overhead on Alpha CPUs.

        Note that the value returned by rcu_dereference() is valid
        only within the enclosing RCU read-side critical section.
        For example, the following is -not- legal:

                rcu_read_lock();
                p = rcu_dereference(head.next);
                rcu_read_unlock();
                x = p->address;         /* BUG!!! */
                rcu_read_lock();
                y = p->data;            /* BUG!!! */
                rcu_read_unlock();

        Holding a reference from one RCU read-side critical section
        to another is just as illegal as holding a reference from
        one lock-based critical section to another! Similarly,
        using a reference outside of the critical section in which
        it was acquired is just as illegal as doing so with normal
        locking.

        As with rcu_assign_pointer(), an important function of
        rcu_dereference() is to document which pointers are protected by
        RCU, in particular, flagging a pointer that is subject to change
        at any time, including immediately after the rcu_dereference().
        And, again like rcu_assign_pointer(), rcu_dereference() is
        typically used indirectly, via the _rcu list-manipulation
        primitives, such as list_for_each_entry_rcu().

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.


            rcu_assign_pointer()
                                    +--------+
            +---------------------->| reader |---------+
            |                       +--------+         |
            |                           |              |
            |                           |              | Protect:
            |                           |              | rcu_read_lock()
            |                           |              | rcu_read_unlock()
            |        rcu_dereference()  |              |
       +---------+                      |              |
       | updater |<---------------------+              |
       +---------+                                     V
            |                                    +-----------+
            +----------------------------------->| reclaimer |
                                                 +-----------+
              Defer:
              synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked. Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are no fewer than three RCU mechanisms in the Linux kernel; the
diagram above shows the first one, which is by far the most commonly used.
The rcu_dereference() and rcu_assign_pointer() primitives are used for
all three mechanisms, but different defer and protect primitives are
used as follows:

        Defer                   Protect

a.      synchronize_rcu()       rcu_read_lock() / rcu_read_unlock()
        call_rcu()              rcu_dereference()

b.      call_rcu_bh()           rcu_read_lock_bh() / rcu_read_unlock_bh()
                                rcu_dereference_bh()

c.      synchronize_sched()     rcu_read_lock_sched() / rcu_read_unlock_sched()
                                preempt_disable() / preempt_enable()
                                local_irq_save() / local_irq_restore()
                                hardirq enter / hardirq exit
                                NMI enter / NMI exit
                                rcu_dereference_sched()

These three mechanisms are used as follows:

a.      RCU applied to normal data structures.

b.      RCU applied to networking data structures that may be subjected
        to remote denial-of-service attacks.

c.      RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a). The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.
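
As a sketch, a reader using mechanism (b) might look as follows, again
assuming a hypothetical RCU-protected global pointer gp and hypothetical
do_something_with() function. (Note that rcu_read_lock_bh() also disables
softirq processing on the local CPU, which is what allows softirq-context
updaters to rely on call_rcu_bh() grace periods.)

        struct foo *p;

        rcu_read_lock_bh();             /* also disables softirqs locally */
        p = rcu_dereference_bh(gp);
        if (p)
                do_something_with(p);
        rcu_read_unlock_bh();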


3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure. More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

        struct foo {
                int a;
                char b;
                long c;
        };
        DEFINE_SPINLOCK(foo_mutex);

        struct foo *gbl_foo;

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a". Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses synchronize_rcu() to ensure that any readers that might
         * have references to the old structure complete before freeing
         * the old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = gbl_foo;
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                synchronize_rcu();
                kfree(old_fp);
        }

        /*
         * Return the value of field "a" of the current gbl_foo
         * structure. Use rcu_read_lock() and rcu_read_unlock()
         * to ensure that the structure does not get deleted out
         * from under us, and use rcu_dereference() to ensure that
         * we see the initialized version of the structure (important
         * for DEC Alpha and for people reading the code).
         */
        int foo_get_a(void)
        {
                int retval;

                rcu_read_lock();
                retval = rcu_dereference(gbl_foo)->a;
                rcu_read_unlock();
                return retval;
        }

So, to sum up:

o       Use rcu_read_lock() and rcu_read_unlock() to guard RCU
        read-side critical sections.

o       Within an RCU read-side critical section, use rcu_dereference()
        to dereference RCU-protected pointers.

o       Use some solid scheme (such as locks or semaphores) to
        keep concurrent updates from interfering with each other.

o       Use rcu_assign_pointer() to update an RCU-protected pointer.
        This primitive protects concurrent readers from the updater,
        -not- concurrent updates from each other! You therefore still
        need to use locking (or something similar) to keep concurrent
        rcu_assign_pointer() primitives from interfering with each other.

o       Use synchronize_rcu() -after- removing a data element from an
        RCU-protected data structure, but -before- reclaiming/freeing
        the data element, in order to wait for the completion of all
        RCU read-side critical sections that might be referencing that
        data item.

See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.txt,
arrayRCU.txt, and NMI-RCU.txt.


4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

        void call_rcu(struct rcu_head *head,
                      void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block. The foo struct needs to
have an rcu_head structure added, perhaps as follows:

        struct foo {
                int a;
                char b;
                long c;
                struct rcu_head rcu;
        };

The foo_update_a() function might then be written as follows:

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a". Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses call_rcu() to ensure that any readers that might have
         * references to the old structure complete before freeing the
         * old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = gbl_foo;
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                call_rcu(&old_fp->rcu, foo_reclaim);
        }

The foo_reclaim() function might appear as follows:

        void foo_reclaim(struct rcu_head *rp)
        {
                struct foo *fp = container_of(rp, struct foo, rcu);

                kfree(fp);
        }

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element. It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o       Use call_rcu() -after- removing a data element from an
        RCU-protected data structure in order to register a callback
        function that will be invoked after the completion of all RCU
        read-side critical sections that might be referencing that
        data item.

Again, see checklist.txt for additional rules governing the use of RCU.


5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel. This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU. Both are way too simple for real-world use,
lacking both functionality and performance. However, they are useful
in getting a feel for how RCU works. See kernel/rcupdate.c for a
production-quality implementation, and see:

        http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation. The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A.  "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives. Its overhead makes it a non-starter for
real-life use, as does its lack of scalability. It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple:

        static DEFINE_RWLOCK(rcu_gp_mutex);

        void rcu_read_lock(void)
        {
                read_lock(&rcu_gp_mutex);
        }

        void rcu_read_unlock(void)
        {
                read_unlock(&rcu_gp_mutex);
        }

        void synchronize_rcu(void)
        {
                write_lock(&rcu_gp_mutex);
                write_unlock(&rcu_gp_mutex);
        }

[You can ignore rcu_assign_pointer() and rcu_dereference() without
missing much. But here they are anyway. And whatever you do, don't
forget about them when submitting patches making use of RCU!]

        #define rcu_assign_pointer(p, v)        ({ \
                smp_wmb();      /* order prior stores before the new pointer */ \
                (p) = (v); \
        })

        #define rcu_dereference(p)      ({ \
                typeof(p) _________p1 = p; \
                smp_read_barrier_depends();     /* needed only on Alpha */ \
                (_________p1); \
        })


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock. The synchronize_rcu()
primitive write-acquires this same lock, then immediately releases
it. This means that once synchronize_rcu() exits, all RCU read-side
critical sections that were in progress before synchronize_rcu() was
called are guaranteed to have completed -- there is no way that
synchronize_rcu() would have been able to write-acquire the lock
otherwise.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired. Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU). The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:  Why is this argument naive? How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel? How could this deadlock be avoided?


5B.  "TOY" EXAMPLE #2: CLASSIC RCU

This section presents a "toy" RCU implementation that is based on
"classic RCU". It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
kernels. The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

        void rcu_read_lock(void) { }

        void rcu_read_unlock(void) { }

        void synchronize_rcu(void)
        {
                int cpu;

                for_each_possible_cpu(cpu)
                        run_on(cpu);
        }

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn. The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive. Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant -toy-!

So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section. Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu(). Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???


6.  ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking. The following unified
diff shows how closely related RCU and reader-writer locking can be.

 @@ -13,15 +14,15 @@
        struct list_head *lp;
        struct el *p;

-       read_lock();
-       list_for_each_entry(p, head, lp) {
+       rcu_read_lock();
+       list_for_each_entry_rcu(p, head, lp) {
                if (p->key == key) {
                        *result = p->data;
-                       read_unlock();
+                       rcu_read_unlock();
                        return 1;
                }
        }
-       read_unlock();
+       rcu_read_unlock();
        return 0;
 }

 @@ -29,15 +30,16 @@
 {
        struct el *p;

-       write_lock(&listmutex);
+       spin_lock(&listmutex);
        list_for_each_entry(p, head, lp) {
                if (p->key == key) {
-                       list_del(&p->list);
-                       write_unlock(&listmutex);
+                       list_del_rcu(&p->list);
+                       spin_unlock(&listmutex);
+                       synchronize_rcu();
                        kfree(p);
                        return 1;
                }
        }
-       write_unlock(&listmutex);
+       spin_unlock(&listmutex);
        return 0;
 }

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 spinlock_t listmutex;                8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock();                       6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock();                10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock();                    14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small. Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently. In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block. If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu(), that can be used in
place of synchronize_rcu(), as shown in the sketch below.
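
For example, the RCU-based delete() might be converted to use call_rcu()
as follows. (A sketch only: it assumes that struct el gains a struct
rcu_head field named rcu, and el_reclaim() is a hypothetical reclaimer
along the lines of the foo_reclaim() function shown in Section 4.)

        static void el_reclaim(struct rcu_head *rp)
        {
                struct el *p = container_of(rp, struct el, rcu);

                kfree(p);
        }

        int delete(long key)
        {
                struct el *p;

                spin_lock(&listmutex);
                list_for_each_entry(p, head, lp) {
                        if (p->key == key) {
                                list_del_rcu(&p->list);
                                spin_unlock(&listmutex);
                                call_rcu(&p->rcu, el_reclaim); /* defer the kfree() */
                                return 1;
                        }
                }
                spin_unlock(&listmutex);
                return 0;
        }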


7.  FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook. Here is the list, by category.

RCU list traversal:

        list_for_each_entry_rcu
        hlist_for_each_entry_rcu
        hlist_nulls_for_each_entry_rcu

        list_for_each_continue_rcu      (to be deprecated in favor of new
                                         list_for_each_entry_continue_rcu)

RCU pointer/list update:

        rcu_assign_pointer
        list_add_rcu
        list_add_tail_rcu
        list_del_rcu
        list_replace_rcu
        hlist_del_rcu
        hlist_add_after_rcu
        hlist_add_before_rcu
        hlist_add_head_rcu
        hlist_replace_rcu
        list_splice_init_rcu()

RCU:            Critical sections       Grace period            Barrier

                rcu_read_lock           synchronize_net         rcu_barrier
                rcu_read_unlock         synchronize_rcu
                rcu_dereference         synchronize_rcu_expedited
                                        call_rcu


bh:             Critical sections       Grace period            Barrier

                rcu_read_lock_bh        call_rcu_bh             rcu_barrier_bh
                rcu_read_unlock_bh      synchronize_rcu_bh
                rcu_dereference_bh      synchronize_rcu_bh_expedited


sched:          Critical sections       Grace period            Barrier

                rcu_read_lock_sched     synchronize_sched       rcu_barrier_sched
                rcu_read_unlock_sched   call_rcu_sched
                [preempt_disable]       synchronize_sched_expedited
                [and friends]
                rcu_dereference_sched


SRCU:           Critical sections       Grace period            Barrier

                srcu_read_lock          synchronize_srcu        N/A
                srcu_read_unlock        synchronize_srcu_expedited
                srcu_dereference

SRCU:           Initialization/cleanup
                init_srcu_struct
                cleanup_srcu_struct

All: lockdep-checked RCU-protected pointer access

        rcu_dereference_check
        rcu_dereference_protected
        rcu_access_pointer

See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use? The following
list can be helpful:

a.      Will readers need to block? If so, you need SRCU.

b.      What about the -rt patchset? If readers would need to block
        in a non-rt kernel, you need SRCU. If readers would block
        in a -rt kernel, but not in a non-rt kernel, SRCU is not
        necessary.

c.      Do you need to treat NMI handlers, hardirq handlers,
        and code segments with preemption disabled (whether
        via preempt_disable(), local_irq_save(), local_bh_disable(),
        or some other mechanism) as if they were explicit RCU readers?
        If so, you need RCU-sched.

d.      Do you need RCU grace periods to complete even in the face
        of softirq monopolization of one or more of the CPUs? For
        example, is your code subject to network-based denial-of-service
        attacks? If so, you need RCU-bh.

e.      Is your workload too update-intensive for normal use of
        RCU, but inappropriate for other synchronization mechanisms?
        If so, consider SLAB_DESTROY_BY_RCU. But please be careful!

f.      Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.
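
If SRCU turns out to be the right choice, a reader might be sketched as
follows, assuming a hypothetical srcu_struct named my_srcu that has been
set up with init_srcu_struct(), along with the hypothetical gp pointer
used in earlier sketches. (Note that, unlike rcu_read_lock(), an SRCU
reader may block, and that srcu_read_lock() returns an index that must
be passed to the matching srcu_read_unlock().)

        static struct srcu_struct my_srcu;      /* init_srcu_struct(&my_srcu) */

        int idx;
        struct foo *p;

        idx = srcu_read_lock(&my_srcu);
        p = srcu_dereference(gp, &my_srcu);
        if (p)
                do_something_that_may_block(p); /* blocking is legal under SRCU */
        srcu_read_unlock(&my_srcu, idx);

An SRCU updater would then use synchronize_srcu(&my_srcu) in place of
synchronize_rcu() to wait for readers of that particular srcu_struct.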


8.  ANSWERS TO QUICK QUIZZES

Quick Quiz #1:  Why is this argument naive? How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel? [Referring to the lock-based "toy" RCU
                algorithm.]

Answer:         Consider the following sequence of events:

                1.      CPU 0 acquires some unrelated lock, call it
                        "problematic_lock", disabling irq via
                        spin_lock_irqsave().

                2.      CPU 1 enters synchronize_rcu(), write-acquiring
                        rcu_gp_mutex.

                3.      CPU 0 enters rcu_read_lock(), but must wait
                        because CPU 1 holds rcu_gp_mutex.

                4.      CPU 1 is interrupted, and the irq handler
                        attempts to acquire problematic_lock.

                The system is now deadlocked.

                One way to avoid this deadlock is to use an approach like
                that of CONFIG_PREEMPT_RT, where all normal spinlocks
                become blocking locks, and all irq handlers execute in
                the context of special tasks. In this case, in step 4
                above, the irq handler would block, allowing CPU 1 to
                release rcu_gp_mutex, avoiding the deadlock.

                Even in the absence of deadlock, this RCU implementation
                allows latency to "bleed" from readers to other
                readers through synchronize_rcu(). To see this,
                consider task A in an RCU read-side critical section
                (thus read-holding rcu_gp_mutex), task B blocked
                attempting to write-acquire rcu_gp_mutex, and
                task C blocked in rcu_read_lock() attempting to
                read-acquire rcu_gp_mutex. Task A's RCU read-side
                latency is holding up task C, albeit indirectly via
                task B.

                Realtime RCU implementations therefore use a counter-based
                approach where tasks in RCU read-side critical sections
                cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Answer:         Imagine a single-CPU system with a non-CONFIG_PREEMPT
                kernel where a routing table is used by process-context
                code, but can be updated by irq-context code (for example,
                by an "ICMP REDIRECT" packet). The usual way of handling
                this would be to have the process-context code disable
                interrupts while searching the routing table. Use of
                RCU allows such interrupt-disabling to be dispensed with.
                Thus, without RCU, you pay the cost of disabling interrupts,
                and with RCU you don't.

                One can argue that the overhead of RCU in this
                case is negative with respect to the single-CPU
                interrupt-disabling approach. Others might argue that
                the overhead of RCU is merely zero, and that replacing
                the positive overhead of the interrupt-disabling scheme
                with the zero-overhead RCU scheme does not constitute
                negative overhead.

                In real life, of course, things are more complex. But
                even the theoretical possibility of negative overhead for
                a synchronization primitive is a bit unexpected. ;-)

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???

Answer:         Just as PREEMPT_RT permits preemption of spinlock
                critical sections, it permits preemption of RCU
                read-side critical sections. It also permits
                spinlocks blocking while in RCU read-side critical
                sections.

                Why the apparent inconsistency? Because it is
                possible to use priority boosting to keep the RCU
                grace periods short if need be (for example, if running
                short of memory). In contrast, if blocking waiting
                for (say) network reception, there is no way to know
                what should be boosted. Especially given that the
                process we need to boost might well be a human being
                who just went out for a pizza or something. And although
                a computer-operated cattle prod might arouse serious
                interest, it might also provoke serious objections.
                Besides, how does the computer know what pizza parlor
                the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.