Dynamic DMA mapping Guide
=========================

David S. Miller <davem@redhat.com>
Richard Henderson <rth@cygnus.com>
Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
DMA-API.txt.

Most 64-bit platforms have special hardware that translates bus
addresses (DMA addresses) into physical addresses. This is similar to
how page tables and/or a TLB translate virtual addresses to physical
addresses on a CPU. This is needed so that e.g. PCI devices can
access any page in the 64-bit physical address space with a Single
Address Cycle (32-bit DMA address). Previously in Linux those 64-bit
platforms had to set artificial limits on the maximum RAM size in the
system, so that the virt_to_bus() static scheme would work (the DMA
address translation tables were simply filled on bootup to map each
bus address to the physical page __pa(bus_to_virt())).

So that Linux can use the dynamic DMA mapping, it needs some help from
the drivers: it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the
DMA transfer.

Of course, the following API will work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than a
bus-specific DMA API (e.g. pci_dma_*).

First of all, you should make sure

	#include <linux/dma-mapping.h>

is in your driver. This file provides the definition of the dma_addr_t
type, which can hold any valid DMA address for the platform and should
be used everywhere you hold a DMA (bus) address returned from the DMA
mapping functions.

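As an illustration (a hypothetical sketch; the structure and field
names are purely illustrative), a driver's private state might keep
the dma_addr_t alongside the CPU pointer for each buffer it maps:

	struct my_device_state {
		void		*tx_buf;	/* CPU (virtual) address */
		dma_addr_t	tx_dma;		/* matching DMA (bus) address */
		size_t		tx_len;		/* length, needed for unmap */
	};
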
What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities. There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA. It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().

What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

DMA addressing limitations

Does your device have any DMA addressing limitations? For example, is
your device only capable of driving the low order 24 bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32 bits. For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices
to support 64-bit addressing (DAC) for all transactions. And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has. It is good
style to do this even if your device uses the default setting,
because this shows that you did think about these issues with respect
to your device.

The query is performed via a call to dma_set_mask():

	int dma_set_mask(struct device *dev, u64 mask);

The query for consistent allocations is performed via a call to
dma_set_coherent_mask():

	int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports. It returns zero if your card can perform DMA properly on
the machine given the address mask you provided. In general, the
device struct of your device is embedded in the bus-specific device
struct of your device. For example, a pointer to the device struct of
your PCI device is pdev->dev (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior. You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a KERN_WARNING message
when you end up performing either #2 or #3. In this manner, if a user
of your driver reports that performance is bad or that the device is
not even detected, you can ask them for the kernel messages to find
out exactly why.

The standard 32-bit addressing device would do something like this:

	if (dma_set_mask(dev, DMA_BIT_MASK(32))) {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

Another common scenario is a 64-bit capable device. The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail. The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing. Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing. For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64 bits when accessing streaming DMA:

	int using_dac;

	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
	} else {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

	int using_dac, consistent_using_dac;

	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
		consistent_using_dac = 1;
		dma_set_coherent_mask(dev, DMA_BIT_MASK(64));
	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
		consistent_using_dac = 0;
		dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
	} else {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

dma_set_coherent_mask() will always be able to set the same mask as,
or a smaller mask than, dma_set_mask(). However, for the rare case
that a device driver only uses consistent allocations, one would have
to check the return value from dma_set_coherent_mask().

Finally, if your device can only drive the low 24 bits of
address you might do something like:

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		printk(KERN_WARNING
		       "mydev: 24-bit DMA addressing not available.\n");
		goto ignore_this_device;
	}

When dma_set_mask() is successful and returns zero, the kernel saves
the mask you have provided. The kernel will use this information
later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation. If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle. It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
		       card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
		       card->name);
	}

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware
  should guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the bus space. However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa. Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
             proper memory barriers. The CPU may reorder stores to
             consistent memory just as it may normal memory. Example:
             if it is important for the device to see the first word
             of a descriptor updated before the second, you must do
             something like:

		desc->word0 = address;
		wmb();
		desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

  Also, on some platforms your driver may need to flush CPU write
  buffers in much the same way as it needs to flush write buffers
  found in PCI bridges (such as by reading a register's value
  after writing it).

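  As a hedged illustration of the read-back technique just mentioned
  (the register offset MYDEV_RING_BASE, the ioremap'ed base ioaddr,
  and ring_dma are all hypothetical), a driver might flush a posted
  MMIO write like this:

	/* Tell the device where the descriptor ring lives. */
	writel((u32) ring_dma, ioaddr + MYDEV_RING_BASE);

	/* Read the register back; the read cannot complete until
	 * the preceding write has been pushed through any
	 * intervening bridge write buffers.
	 */
	readl(ioaddr + MYDEV_RING_BASE);
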
- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows. To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


Using Consistent DMA mappings.

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *. This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly
to __get_free_pages (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable. Even if the
device indicates (via the DMA mask) that it may address the upper
32 bits, consistent allocation will only return > 32-bit addresses for
DMA if the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask(). This is true of the dma_pool interface as
well.

dma_alloc_coherent returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev and size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent returned to you.
This function may not be called in interrupt context.
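
Putting the two calls together with the NULL check discussed under
"Handling Errors" below, a minimal sketch of the whole lifecycle
(RING_BYTES and the error label are hypothetical) might read:

	#define RING_BYTES	4096	/* hypothetical ring size */

	dma_addr_t ring_dma;
	void *ring;

	ring = dma_alloc_coherent(dev, RING_BYTES, &ring_dma, GFP_KERNEL);
	if (!ring)
		goto err_no_ring;	/* allocation failed */

	/* ... hand ring_dma to the device, access ring from the CPU ... */

	dma_free_coherent(dev, RING_BYTES, ring, ring_dma);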

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent,
or you can use the dma_pool API to do that. A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent, not __get_free_pages.
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this:

	struct dma_pool *pool;

	pool = dma_pool_create(name, dev, size, align, alloc);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for alloc; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better
to go for dma_alloc_coherent directly instead).

Allocate memory from a dma pool like this:

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent,
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

	dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc, and cpu_addr and
dma_handle are the values dma_pool_alloc returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling:

	dma_pool_destroy(pool);

Make sure you've called dma_pool_free for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
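
Tying these calls together, a hedged sketch of a pool of 64-byte,
16-byte-aligned command blocks (the sizes, the pool name, and the
error labels are hypothetical) could look like:

	struct dma_pool *pool;
	dma_addr_t cmd_dma;
	void *cmd;

	pool = dma_pool_create("mydev_cmds", dev, 64, 16, 0);
	if (!pool)
		goto err_no_pool;

	cmd = dma_pool_alloc(pool, GFP_KERNEL, &cmd_dma);
	if (!cmd)
		goto err_no_cmd;

	/* ... point the device at cmd_dma, fill in cmd from the CPU ... */

	dma_pool_free(pool, cmd, cmd_dma);
	dma_pool_destroy(pool);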

DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

	DMA_BIDIRECTIONAL
	DMA_TO_DEVICE
	DMA_FROM_DEVICE
	DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device"
DMA_FROM_DEVICE means "from the device to main memory"
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL. It means that the DMA can go in
either direction. The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value DMA_NONE is to be used for debugging. You can
hold this in a data structure before you know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (beyond any
platform-specific optimizations) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space. Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair. For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier. For receive packets, just the opposite: map/unmap them
with the DMA_FROM_DEVICE direction specifier.
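
For instance, a hypothetical transmit path might map the packet data
with DMA_TO_DEVICE using the dma_map_single() interface described in
the next section (skb is the socket buffer being transmitted):

	dma_addr_t mapping;

	mapping = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

	/* ... place 'mapping' in the transmit descriptor ... */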

Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);

and to unmap it:

	dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_unmap_single when the DMA activity is finished,
e.g. from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single. These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);

	...

	dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

With scatterlists, you map a region gathered from several regions by:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.
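
As a hedged sketch of how such an sglist might be built in the first
place (the three lowmem buffers and their lengths are hypothetical;
the helpers come from linux/scatterlist.h):

	struct scatterlist sglist[3];
	int count;

	sg_init_table(sglist, 3);
	sg_set_buf(&sglist[0], hdr_buf, hdr_len);
	sg_set_buf(&sglist[1], data_buf, data_len);
	sg_set_buf(&sglist[2], trl_buf, trl_len);

	count = dma_map_sg(dev, sglist, 3, direction);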

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first
one ends and the second one starts on a page boundary - in fact this
is a huge advantage for cards which either cannot do scatter-gather
or have a very limited number of scatter-gather entries) and returns
the actual number of sg entries it mapped them to. On failure 0 is
returned.

Then you should loop count times (note: this can be fewer than nents
times) and use the sg_dma_address() and sg_dma_len() macros where you
previously accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

	dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be
             the _same_ one you passed into the dma_map_sg call,
             it should _NOT_ be the 'count' value _returned_ from the
             dma_map_sg call.

Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
counterpart, because the bus address space is a shared resource
(although in some ports the mapping is per bus, so fewer devices
contend for the same bus address space) and you could render the
machine unusable by consuming all bus addresses.

If you need to use the same streaming DMA region multiple times and
touch the data in between the DMA transfers, the buffer needs to be
synced properly in order for the CPU and device to see the most
up-to-date and correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}, and after each DMA
transfer call either:

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either:

	dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}. If you don't touch the data from the first
dma_map_* call till dma_unmap_*, then you don't have to call the
dma_sync_* routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces.

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}

	...

	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
	{
		struct my_card *cp = devid;

		...
		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
			struct my_card_header *hp;

			/* Examine the header to see if we wish
			 * to accept the data. But synchronize
			 * the DMA transfer with the CPU first
			 * so that we see updated contents.
			 */
			dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
						cp->rx_len,
						DMA_FROM_DEVICE);

			/* Now it is safe to examine the buffer. */
			hp = (struct my_card_header *) cp->rx_buf;
			if (header_is_ok(hp)) {
				dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
						 DMA_FROM_DEVICE);
				pass_to_upper_layers(cp->rx_buf);
				make_and_setup_new_rx_buf(cp);
			} else {
				/* Just sync the buffer and give it back
				 * to the card.
				 */
				dma_sync_single_for_device(cp->dev,
							   cp->rx_dma,
							   cp->rx_len,
							   DMA_FROM_DEVICE);
				give_rx_buf_to_card(cp);
			}
		}
	}

Drivers converted fully to this interface should not use virt_to_bus
any longer, nor should they use bus_to_virt. Some drivers have to be
changed a little bit, because there is no longer an equivalent to
bus_to_virt in the dynamic DMA mapping scheme - you have to always
store the DMA addresses returned by the dma_alloc_coherent,
dma_pool_alloc, and dma_map_single calls (dma_map_sg stores them in
the scatterlist itself if the platform supports dynamic DMA mapping
in hardware) in your driver structures and/or in the card registers.

All drivers should be using these interfaces with no exceptions. It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated. Some ports already do not provide these
as it is impossible to correctly support them.

Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after:

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};

2) Use dma_unmap_{addr,len}_set to set these values.
   Example, before:

	ringp->mapping = FOO;
	ringp->len = BAR;

   after:

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len} to access these values.
   Example, before:

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

   after:

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Struct scatterlist must contain, at a minimum, the following
   members:

	struct page *page;
	unsigned int offset;
	unsigned int length;

   The base address is specified by a "page+offset" pair.

   Previous versions of struct scatterlist contained a "void *address"
   field that was sometimes used instead of page+offset. As of Linux
   2.5, page+offset is always used, and the "address" field has been
   deleted.

2) More to come...

Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0

- checking the returned dma_addr_t of dma_map_single and dma_map_page
  by using dma_mapping_error():

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
	}
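
Similarly, a failed scatterlist mapping is indicated by a return
value of 0 from dma_map_sg, so a sketch of that check (under the
same hypothetical names as above) might read:

	int count;

	count = dma_map_sg(dev, sglist, nents, direction);
	if (count == 0) {
		/*
		 * No bus address space available: back off and
		 * retry later, or fall back to a non-DMA mode.
		 */
	}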

Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>
	Ralf Baechle <ralf@oss.sgi.com>
	Grant Grundler <grundler@cup.hp.com>
	Jay Estabrook <Jay.Estabrook@compaq.com>
	Thomas Sailer <sailer@ife.ee.ethz.ch>
	Andrea Arcangeli <andrea@suse.de>
	Jens Axboe <jens.axboe@oracle.com>
	David Mosberger-Tang <davidm@hpl.hp.com>