		Semantics and Behavior of Atomic and
			 Bitmask Operations

			  David S. Miller

This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

	typedef struct { int counter; } atomic_t;

Historically, counter has been declared volatile.  This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.

local_t is very similar to atomic_t.  If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate.  Please see
Documentation/local_ops.txt for the semantics of local_t.

The first operations to implement for atomic_t's are the initializers and
plain reads.

	#define ATOMIC_INIT(i)		{ (i) }
	#define atomic_set(v, i)	((v)->counter = (i))

The first macro is used in definitions, such as:

	static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic operations
are guaranteed to be correct reflecting the initialized value if the
initializer is used before runtime.  If the initializer is used at runtime, a
proper implicit or explicit read memory barrier is needed before reading the
value with atomic_read from another thread.

The second interface can be used at runtime, as in:

	struct foo { atomic_t counter; };
	...

	struct foo *k;

	k = kmalloc(sizeof(*k), GFP_KERNEL);
	if (!k)
		return -ENOMEM;
	atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations by
all threads are guaranteed to be correct reflecting either the value that has
been set with this operation or set with another operation.  A proper implicit
or explicit memory barrier is needed before the value set with the operation
is guaranteed to be readable with atomic_read from another thread.

Next, we have:

	#define atomic_read(v)	((v)->counter)

which simply reads the counter value currently visible to the calling thread.
The read is atomic in that the return value is guaranteed to be one of the
values initialized or modified with the interface operations if a proper
implicit or explicit memory barrier is used after possible runtime
initialization by any other thread and the value is modified only with the
interface operations.  atomic_read does not guarantee that the runtime
initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.

*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***

Some architectures may choose to use the volatile keyword, barriers, or inline
assembly to guarantee some degree of immediacy for atomic_read() and
atomic_set().  This is not uniformly guaranteed, and may change in the future,
so all users of atomic_t should treat atomic_read() and atomic_set() as simple
C statements that may be reordered or optimized away entirely by the compiler
or processor, and explicitly invoke the appropriate compiler and/or memory
barrier for each use case.  Failure to do so will result in code that may
suddenly break when used with different architectures or compiler
optimizations, or even changes in unrelated code which changes how the
compiler optimizes the section accessing atomic_t variables.

*** YOU HAVE BEEN WARNED! ***

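For instance, when one CPU publishes data that another CPU picks up after
seeing a flag, the ordering has to come from explicit barriers; a minimal
sketch (the "ready"/"data" pair and compute_data() are purely illustrative,
not an existing interface):

	static atomic_t ready = ATOMIC_INIT(0);
	static int data;

	void producer(void)
	{
		data = compute_data();	/* hypothetical helper */
		smp_wmb();		/* order the store to data ... */
		atomic_set(&ready, 1);	/* ... before the flag update */
	}

	int consumer(void)
	{
		if (!atomic_read(&ready))
			return -EAGAIN;
		smp_rmb();		/* order the flag read ... */
		return data;		/* ... before the read of data */
	}
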
Properly aligned pointers, longs, ints, and chars (and unsigned
equivalents) may be atomically loaded from and stored to in the same
sense as described for atomic_read() and atomic_set().  The ACCESS_ONCE()
macro should be used to prevent the compiler from using optimizations
that might otherwise optimize accesses out of existence on the one hand,
or that might create unsolicited accesses on the other.

For example consider the following code:

	while (a > 0)
		do_something();

If the compiler can prove that do_something() does not store to the
variable a, then the compiler is within its rights transforming this to
the following:

	tmp = a;
	if (tmp > 0)
		for (;;)
			do_something();

If you don't want the compiler to do this (and you probably don't), then
you should use something like the following:

	while (ACCESS_ONCE(a) > 0)
		do_something();

Alternatively, you could place a barrier() call in the loop.

For another example, consider the following code:

	tmp_a = a;
	do_something_with(tmp_a);
	do_something_else_with(tmp_a);

If the compiler can prove that do_something_with() does not store to the
variable a, then the compiler is within its rights to manufacture an
additional load as follows:

	tmp_a = a;
	do_something_with(tmp_a);
	tmp_a = a;
	do_something_else_with(tmp_a);

This could fatally confuse your code if it expected the same value
to be passed to do_something_with() and do_something_else_with().

The compiler would be likely to manufacture this additional load if
do_something_with() was an inline function that made very heavy use
of registers: reloading from variable a could save a flush to the
stack and later reload.  To prevent the compiler from attacking your
code in this manner, write the following:

	tmp_a = ACCESS_ONCE(a);
	do_something_with(tmp_a);
	do_something_else_with(tmp_a);

For a final example, consider the following code, assuming that the
variable a is set at boot time before the second CPU is brought online
and never changed later, so that memory barriers are not needed:

	if (a)
		b = 9;
	else
		b = 42;

The compiler is within its rights to manufacture an additional store
by transforming the above code into the following:

	b = 42;
	if (a)
		b = 9;

This could come as a fatal surprise to other code running concurrently
that expected b to never have the value 42 if a was zero.  To prevent
the compiler from doing this, write something like:

	if (a)
		ACCESS_ONCE(b) = 9;
	else
		ACCESS_ONCE(b) = 42;

Don't even -think- about doing this without proper use of memory barriers,
locks, or atomic operations if variable a can change at runtime!

*** WARNING: ACCESS_ONCE() DOES NOT IMPLY A BARRIER! ***

Now, we move onto the atomic operation interfaces typically implemented with
the help of assembly code.

	void atomic_add(int i, atomic_t *v);
	void atomic_sub(int i, atomic_t *v);
	void atomic_inc(atomic_t *v);
	void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.

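A typical use is a plain event counter that is only ever summed up for
reporting, where no ordering against other memory accesses is needed at
all; a minimal sketch (the counter name is made up):

	static atomic_t nr_events = ATOMIC_INIT(0);

	void note_event(void)
	{
		/* SMP safe, but no memory barrier is implied or needed. */
		atomic_inc(&nr_events);
	}
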
Next, we have:

	int atomic_inc_return(atomic_t *v);
	int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

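One common use of the value-returning variants is handing out unique
identifiers, where the implied full-barrier semantics come along for
free; a minimal sketch (the counter and function are hypothetical):

	static atomic_t next_id = ATOMIC_INIT(0);

	int alloc_id(void)
	{
		/* Behaves as if smp_mb() were issued before and after. */
		return atomic_inc_return(&next_id);
	}
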
Let's move on:

	int atomic_add_return(int i, atomic_t *v);
	int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

	int atomic_inc_and_test(atomic_t *v);
	int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

Again, explicit memory barrier semantics are required around the
operation, as above.

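The classic consumer of atomic_dec_and_test() is reference count
release, as in the object management example further below; a minimal
sketch along those lines:

	void obj_put(struct obj *obj)
	{
		/* The implied barriers guarantee that all prior stores
		 * to *obj are visible before the counter can hit zero.
		 */
		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}
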
	int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

	int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter value
is negative.  It requires explicit memory barrier semantics around the
operation.

Then:

	int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v, setting
the given new value.  It returns the old value that the atomic variable v had
just before the operation.

atomic_xchg requires explicit memory barriers around the operation.

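As an illustration, atomic_xchg() can be used to claim a one-shot flag,
where only the first caller observes the old value of zero (a sketch;
the flag and function are hypothetical):

	static atomic_t triggered = ATOMIC_INIT(0);

	int trigger_once(void)
	{
		/* Returns 1 for exactly one caller, 0 for all others. */
		return atomic_xchg(&triggered, 1) == 0;
	}
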
	int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values.  Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg requires explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

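The usual pattern is a compare-exchange loop that recomputes the new
value from the observed old value and retries on contention, much like
the cas()-based example near the end of this document; a sketch that
increments a counter only up to a caller-supplied limit (the function
is hypothetical):

	/* Add 1 to *v unless that would exceed "limit"; returns nonzero
	 * on success.
	 */
	int atomic_inc_below(atomic_t *v, int limit)
	{
		int old, new, ret;

		for (;;) {
			old = atomic_read(v);
			if (old >= limit)
				return 0;
			new = old + 1;
			ret = atomic_cmpxchg(v, old, new);
			if (ret == old)
				return 1;
			/* Lost a race; retry with the fresh value. */
		}
	}
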
Finally:

	int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non zero.  If v is equal to u then it returns zero.  This is done as
an atomic operation.

atomic_add_unless requires explicit memory barriers around the operation
unless it fails (returns 0).

atomic_inc_not_zero(v) is equivalent to atomic_add_unless(v, 1, 0).

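These are typically used to take a new reference only while an object
is still live, i.e. to refuse resurrecting a counter that has already
dropped to zero; a sketch reusing the struct obj from the example
below (the helper name is made up):

	struct obj *obj_get_live(struct obj *obj)
	{
		/* Fails once the last reference has been dropped. */
		if (!atomic_inc_not_zero(&obj->refcnt))
			return NULL;
		return obj;
	}
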
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

	void smp_mb__before_atomic(void);
	void smp_mb__after_atomic(void);

For example, smp_mb__before_atomic() can be used like so:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic() call, the
implementation could legally allow the atomic counter update visible
to other cpus before the "obj->dead = 1;" assignment.

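smp_mb__after_atomic() is the mirror image, ordering the counter update
before subsequent memory operations; a minimal sketch (the "published"
flag is hypothetical):

	atomic_inc(&obj->ref_count);
	smp_mb__after_atomic();
	obj->published = 1;

Here the increment of obj->ref_count is guaranteed to be globally
visible before the store to obj->published.
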
A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

	static void obj_list_add(struct obj *obj, struct list_head *head)
	{
		obj->active = 1;
		list_add(&obj->list, head);
	}

	static void obj_list_del(struct obj *obj)
	{
		list_del(&obj->list);
		obj->active = 0;
	}

	static void obj_destroy(struct obj *obj)
	{
		BUG_ON(obj->active);
		kfree(obj);
	}

	struct obj *obj_list_peek(struct list_head *head)
	{
		if (!list_empty(head)) {
			struct obj *obj;

			obj = list_entry(head->next, struct obj, list);
			atomic_inc(&obj->refcnt);
			return obj;
		}
		return NULL;
	}

	void obj_poke(void)
	{
		struct obj *obj;

		spin_lock(&global_list_lock);
		obj = obj_list_peek(&global_list);
		spin_unlock(&global_list_lock);

		if (obj) {
			obj->ops->poke(obj);
			if (atomic_dec_and_test(&obj->refcnt))
				obj_destroy(obj);
		}
	}

	void obj_timeout(struct obj *obj)
	{
		spin_lock(&global_list_lock);
		obj_list_del(obj);
		spin_unlock(&global_list_lock);

		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}

(This is a simplification of the ARP queue management in the
generic neighbour discovery code of the networking.  Olaf Kirch
found a bug wrt. memory barriers in kfree_skb() that exposed
the atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

	cpu 0				cpu 1
	obj_poke()			obj_timeout()
	obj = obj_list_peek();
	... gains ref to obj, refcnt=2
					obj_list_del(obj);
					obj->active = 0 ...
					... visibility delayed ...
	atomic_dec_and_test()
	... refcnt drops to 1 ...
					atomic_dec_and_test()
					... refcount drops to 0 ...
					obj_destroy()
					BUG() triggers since obj->active
					still seen as one
	obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24-bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme, that allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks are
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of that
size.  The endianness of the bits within each "unsigned long" are the
native endianness of the cpu.

	void set_bit(unsigned long nr, volatile unsigned long *addr);
	void clear_bit(unsigned long nr, volatile unsigned long *addr);
	void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

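A minimal sketch of typical usage on a flags word (the bit number and
structure are made up); note again that no ordering against surrounding
memory accesses is implied:

	#define OBJ_DIRTY	0	/* hypothetical bit number */

	struct obj_state {
		unsigned long flags;
	};

	void mark_dirty(struct obj_state *s)
	{
		/* Atomic against other bitops on s->flags, but not a
		 * memory barrier.
		 */
		set_bit(OBJ_DIRTY, &s->flags);
	}
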
	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up are the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.

These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

	obj->dead = 1;
	if (test_and_set_bit(0, &obj->flags))
		/* ... */;
	obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

	int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around {set,clear}_bit() (which do
not return a value, and thus do not need to provide memory barrier
semantics), two interfaces are provided:

	void smp_mb__before_atomic(void);
	void smp_mb__after_atomic(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

	/* All memory operations before this call will
	 * be globally visible before the clear_bit().
	 */
	smp_mb__before_atomic();
	clear_bit( ... );

	/* The clear_bit() will be visible before all
	 * subsequent memory operations.
	 */
	smp_mb__after_atomic();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks).  These operate in the same way as their non-_lock/unlock
postfixed variants, except that they are to provide acquire/release semantics,
respectively.  This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers.

	int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
	void clear_bit_unlock(unsigned long nr, unsigned long *addr);
	void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic, however it still implements
unlock barrier semantics.  This can be useful if the lock itself is protecting
the other bits in the word.

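As a minimal sketch (the bit number and helpers are hypothetical),
these are enough to build a simple bit spinlock without any additional
barriers:

	#define MY_LOCK_BIT	0	/* hypothetical lock bit */

	void my_bit_lock(unsigned long *word)
	{
		/* Acquire semantics: later accesses stay after the lock. */
		while (test_and_set_bit_lock(MY_LOCK_BIT, word))
			cpu_relax();
	}

	void my_bit_unlock(unsigned long *word)
	{
		/* Release semantics: earlier accesses stay before the
		 * unlock.
		 */
		clear_bit_unlock(MY_LOCK_BIT, word);
	}
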
Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

	void __set_bit(unsigned long nr, volatile unsigned long *addr);
	void __clear_bit(unsigned long nr, volatile unsigned long *addr);
	void __change_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

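For example, when a spinlock already serializes every writer of the
bitmask, the cheaper non-atomic forms are sufficient; a sketch (the
lock and bitmap are hypothetical):

	static DEFINE_SPINLOCK(map_lock);
	static unsigned long map[BITS_TO_LONGS(128)];

	void map_set(unsigned long nr)
	{
		spin_lock(&map_lock);
		/* All writers hold map_lock, so __set_bit() is safe. */
		__set_bit(nr, map);
		spin_unlock(&map_lock);
	}
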
The routines xchg() and cmpxchg() need the same exact memory barriers
as the atomic and bit operations returning values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release.

Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
with the spinlock.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

	long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

	void example_atomic_inc(long *counter)
	{
		long old, new, ret;

		while (1) {
			old = *counter;
			new = old + 1;

			ret = cas(counter, old, new);
			if (ret == old)
				break;
		}
	}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
	{
		long old, new, ret;
		int went_to_zero;

		went_to_zero = 0;
		while (1) {
			old = atomic_read(atomic);
			new = old - 1;
			if (new == 0) {
				went_to_zero = 1;
				spin_lock(lock);
			}
			ret = cas(atomic, old, new);
			if (ret == old)
				break;
			if (went_to_zero) {
				spin_unlock(lock);
				went_to_zero = 0;
			}
		}

		return went_to_zero;
	}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock being acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.