
               Dynamic DMA mapping using the generic device
               ============================================

        James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
to the API (and actual examples) see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the API. Part II
describes the extensions to the API for supporting non-consistent
memory machines. Unless you know that your driver absolutely has to
support non-consistent platforms (this is usually only legacy
platforms) you should only use the API described in part I.

Part I - dma_ API
-------------------------------------

To get the dma_ API, you must #include <linux/dma-mapping.h>


Part Ia - Using large dma-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
                   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
integer the same width as the bus and used as the physical address
base of the region.

Returns: a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the GFP_ flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                  dma_addr_t dma_handle)

Free the region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into the
consistent allocate. cpu_addr must be the virtual address returned by
the consistent allocate.

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
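
For example, a driver might allocate a small descriptor ring at probe
time and free it at remove time. The following is a minimal sketch
(the structure and function names are hypothetical):

    struct mydev_ring {
        void *vaddr;        /* CPU virtual address of the ring */
        dma_addr_t dma;     /* bus address to program into the device */
    };

    static int mydev_alloc_ring(struct device *dev, struct mydev_ring *ring)
    {
        ring->vaddr = dma_alloc_coherent(dev, PAGE_SIZE, &ring->dma,
                                         GFP_KERNEL);
        if (!ring->vaddr)
            return -ENOMEM;
        return 0;
    }

    static void mydev_free_ring(struct device *dev, struct mydev_ring *ring)
    {
        dma_free_coherent(dev, PAGE_SIZE, ring->vaddr, ring->dma);
    }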


Part Ib - Using small dma-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small dma-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the dma-coherent
allocator, not __get_free_pages(). Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N-byte boundaries.


    struct dma_pool *
    dma_pool_create(const char *name, struct device *dev,
            size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of dma-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and
size are like what you'd pass to dma_alloc_coherent(). The device's
hardware alignment requirement for this type of data is "align" (which
is expressed in bytes, and must be a power of two). If your device has
no boundary crossing restrictions, pass 0 for alloc; passing 4096 says
memory allocated from this pool must not cross 4KByte boundaries.


    void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
            dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or, if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the cpu, and the dma address usable by the pool's
device.


    void dma_pool_free(struct dma_pool *pool, void *vaddr,
            dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the cpu (vaddr) and dma addresses are what were
returned when that routine allocated the memory being freed.


    void dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be called
in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.
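
For example, a driver might keep a pool of fixed-size hardware
descriptors. A minimal sketch (the pool name, sizes and helper names
are hypothetical):

    static struct dma_pool *desc_pool;

    static int mydev_pool_init(struct device *dev)
    {
        /* 64-byte descriptors, 32-byte aligned, never crossing 4K */
        desc_pool = dma_pool_create("mydev_desc", dev, 64, 32, 4096);
        return desc_pool ? 0 : -ENOMEM;
    }

    static void *mydev_desc_get(dma_addr_t *dma)
    {
        /* GFP_KERNEL: called from a context that may sleep */
        return dma_pool_alloc(desc_pool, GFP_KERNEL, dma);
    }

    static void mydev_desc_put(void *vaddr, dma_addr_t dma)
    {
        dma_pool_free(desc_pool, vaddr, dma);
    }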


Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible. It
won't change the current mask settings. It is more intended as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
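
A common pattern in a driver's probe() routine is to try a 64-bit mask
first and fall back to 32 bits. A minimal sketch of that negotiation:

    if (dma_set_mask(dev, DMA_BIT_MASK(64)) ||
        dma_set_coherent_mask(dev, DMA_BIT_MASK(64))) {
        /* fall back to 32-bit DMA addressing */
        if (dma_set_mask(dev, DMA_BIT_MASK(32)) ||
            dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
            dev_warn(dev, "no suitable DMA addressing available\n");
            return -ENODEV;
        }
    }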


Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
               enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the physical handle of the memory.

The dma_ API uses a strongly typed enumerator for its direction:

DMA_NONE           no direction (used for debugging)
DMA_TO_DEVICE      data is going from the memory to the device
DMA_FROM_DEVICE    data is coming from the device to the memory
DMA_BIDIRECTIONAL  direction isn't known

Notes: Not all memory regions in a machine can be mapped by this
API. Further, regions that appear to be physically contiguous in
kernel virtual space may not be contiguous as physical memory. Since
this API does not provide any scatter/gather capability, it will fail
if the user tries to map a non-physically contiguous piece of memory.
For this reason, it is recommended that memory mapped by this API be
obtained only from sources which guarantee it to be physically
contiguous (like kmalloc).

Further, the physical address of the memory must be within the
dma_mask of the device (the dma_mask represents a bit mask of the
addressable region for the device; i.e., if the physical address of
the memory ANDed with the dma_mask is still equal to the physical
address, then the device can perform DMA to the memory). In order to
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the physical memory range of the allocation (e.g. on x86, GFP_DMA
guarantees to be within the first 16MB of available physical memory,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
maps I/O bus addresses to physical memory addresses). However, to be
portable, device driver writers may *not* assume that such an IOMMU
exists.

Warnings: Memory coherency operates at a granularity called the cache
line width. In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line). Since the cache line size
may not be known at compile time, the API will not enforce this
requirement. Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device. Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device. If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device. This memory should
be treated as read-only by the driver. If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it. Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                 enum dma_data_direction direction)

Unmaps the region previously mapped. All the parameters must be
identical to those passed in (and returned) by the mapping API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
             unsigned long offset, size_t size,
             enum dma_data_direction direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
               enum dma_data_direction direction)

API for mapping and unmapping pages. All the notes and warnings for
the other mapping APIs apply here. Also, although the <offset> and
<size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.
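
For example, a receive path might map one full page. A minimal sketch
('page' is a hypothetical struct page owned by the driver; error
checking with dma_mapping_error() is described next):

    dma_addr_t dma;

    dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
    /* ... point the device at 'dma' and wait for the transfer ... */
    dma_unmap_page(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE);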

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single() and dma_map_page() will fail to
create a mapping. A driver can check for these errors by testing the
returned dma address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take
appropriate action (e.g. reduce current DMA mapping usage or delay and
try again later).
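
Putting the two together, a sketch of mapping a kmalloc'ed buffer for
transmission ('buf' and 'len' are hypothetical):

    dma_addr_t dma;

    dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma)) {
        /* reduce current mapping usage or retry later */
        return -ENOMEM;
    }
    /* ... hand 'dma' to the hardware and wait for completion ... */
    dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);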

    int
    dma_map_sg(struct device *dev, struct scatterlist *sg,
        int nents, enum dma_data_direction direction)

Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped
once. The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents
times) and use sg_dma_address() and sg_dma_len() macros where you
previously accessed sg->address and sg->length as shown above.

    void
    dma_unmap_sg(struct device *dev, struct scatterlist *sg,
        int nhwentries, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed into the scatter/gather mapping API.

Note: <nhwentries> must be the number you passed in, *not* the number
of physically mapped entries returned.
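
For example (a sketch; sglist and nents are as in the mapping example
above):

    int count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);

    if (count == 0)
        return -ENOMEM;     /* mapping failed; abort the request */
    /* ... program 'count' hardware segments ... */
    dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);  /* nents, not count */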

void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
                        enum dma_data_direction direction)
void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
                           enum dma_data_direction direction)
void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
                    enum dma_data_direction direction)
void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
                       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the cpu
and device. With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API. With the
sync_single API, you can use dma_handle and size parameters that
aren't identical to those passed into the single mapping API to do a
partial sync.

Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
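
For example, a driver that inspects data the device has just written
into a streaming DMA_FROM_DEVICE buffer might do the following (a
sketch; rx_dma, rx_len, rx_buf and the helper are hypothetical):

    /* give the buffer back to the CPU before reading it */
    dma_sync_single_for_cpu(dev, rx_dma, rx_len, DMA_FROM_DEVICE);
    process_rx_data(rx_buf);    /* hypothetical: CPU reads the data */
    /* hand the buffer back to the device for the next transfer */
    dma_sync_single_for_device(dev, rx_dma, rx_len, DMA_FROM_DEVICE);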

dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                     enum dma_data_direction dir,
                     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                       size_t size, enum dma_data_direction dir,
                       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                 int nents, enum dma_data_direction dir,
                 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                   int nents, enum dma_data_direction dir,
                   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "dma attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of dma attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

    DEFINE_DMA_ATTRS(attrs);
    dma_set_attr(DMA_ATTR_FOO, &attrs);
    ....
    n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
    ....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
                             size_t size, enum dma_data_direction dir,
                             struct dma_attrs *attrs)
{
    ....
    int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
    ....
    if (foo)
        /* twizzle the frobnozzle */
    ....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
                      dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit. By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain. You should
only ever use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
                     dma_addr_t dma_handle)

Free memory allocated by the nonconsistent API. All parameters must
be identical to those passed in (and returned by
dma_alloc_noncoherent()).

int
dma_is_consistent(struct device *dev, dma_addr_t dma_handle)

Returns true if the device dev is performing consistent DMA on the
memory area pointed to by the dma_handle.

int
dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call. It will also always be a power
of two for easy alignment.
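
For example, a driver doing partial syncs might round a transfer
length up to this value (a sketch; 'len' is hypothetical):

    int align = dma_get_cache_alignment();
    size_t sync_len = ALIGN(len, align);    /* round up to a safe width */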

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
               enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size. Again, you *must* observe the cache line
boundaries when doing this.
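
A sketch of the allocate/sync/free life cycle with non-consistent
memory (BUF_BYTES and the fill helper are hypothetical):

    void *vaddr;
    dma_addr_t dma;

    vaddr = dma_alloc_noncoherent(dev, BUF_BYTES, &dma, GFP_KERNEL);
    if (!vaddr)
        return -ENOMEM;

    fill_command_block(vaddr);  /* hypothetical: CPU writes the buffer */
    /* flush the CPU's writes before the device reads via 'dma' */
    dma_cache_sync(dev, vaddr, BUF_BYTES, DMA_TO_DEVICE);
    /* ... device processes the command block ... */

    dma_free_noncoherent(dev, BUF_BYTES, vaddr, dma);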

int
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
                            dma_addr_t device_addr, size_t size, int flags)

Declare a region of memory to be handed out by dma_alloc_coherent()
when it's asked for coherent memory for this device.

bus_addr is the physical address to which the memory is currently
assigned in the bus responding region (this will be used by the
platform to perform the mapping).

device_addr is the physical address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be or'd together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP was passed in) for success or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions. If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.
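
For example, a device with a 64KB window of on-board memory might
declare it like this (a sketch; the bus and device addresses are
hypothetical):

    if (dma_declare_coherent_memory(dev, 0xf0000000, 0x80000000, 0x10000,
                                    DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE)
        != DMA_MEMORY_MAP)
        dev_warn(dev, "could not declare coherent memory\n");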

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
                                  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size of the region (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.

Part III - Debugging drivers' use of the DMA-API
------------------------------------------------

The DMA-API as described above has some constraints; for example, DMA
addresses must be released with the corresponding function and with
the same size. With the advent of hardware IOMMUs it becomes more and
more important that drivers do not violate those constraints. In the
worst case such a violation can result in data corruption, up to and
including destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking
code can be compiled into the kernel which will tell the developer
about those violations. If your architecture supports it you can
select the "Enable debugging of DMA-API usage" option in your kernel
configuration. Enabling this option has a performance impact. Do not
enable it in production kernels.

If you boot the resulting kernel it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If
this code detects an error it prints a warning message with some
details into your kernel log. An example warning message may look
like this:

------------[ cut here ]------------
WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
    check_unmap+0x203/0x490()
Hardware name:
forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
    function [device address=0x00000000640444be] [size=66 bytes] [mapped as
    single] [unmapped as page]
Modules linked in: nfsd exportfs bridge stp llc r8169
Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
Call Trace:
 <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
 [<ffffffff80647b70>] _spin_unlock+0x10/0x30
 [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
 [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
 [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
 [<ffffffff80252f96>] queue_work+0x56/0x60
 [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
 [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
 [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
 [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
 [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
 [<ffffffff803c7ea3>] check_unmap+0x203/0x490
 [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
 [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
 [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
 [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
 [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
 [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
 [<ffffffff8020c093>] ret_from_intr+0x0/0xa
 <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a
stacktrace of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All
other errors will only be counted silently. This limitation exists to
prevent the code from flooding your kernel log. To support debugging a
device driver, this can be disabled via debugfs. See the debugfs
interface documentation below for details.

The debugfs directory for the DMA-API debugging code is called dma-api/.
In this directory the following files can currently be found:

    dma-api/all_errors      This file contains a numeric value. If this
                            value is not equal to zero the debugging code
                            will print a warning for every error it finds
                            into the kernel log. Be careful with this
                            option, as it can easily flood your logs.

    dma-api/disabled        This read-only file contains the character 'Y'
                            if the debugging code is disabled. This can
                            happen when it runs out of memory or if it was
                            disabled at boot time.

    dma-api/error_count     This file is read-only and shows the total
                            number of errors found.

    dma-api/num_errors      The number in this file shows how many
                            warnings will be printed to the kernel log
                            before it stops. This number is initialized to
                            one at system boot and can be set by writing
                            into this file.

    dma-api/min_free_entries
                            This read-only file can be read to get the
                            minimum number of free dma_debug_entries the
                            allocator has ever seen. If this value goes
                            down to zero the code will disable itself
                            because it is no longer reliable.

    dma-api/num_free_entries
                            The current number of free dma_debug_entries
                            in the allocator.

    dma-api/driver-filter
                            You can write a name of a driver into this
                            file to limit the debug output to requests
                            from that particular driver. Write an empty
                            string to that file to disable the filter and
                            see all errors again.
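
For example, to print every error and to restrict the output to a
driver named "foo" (a hypothetical name), assuming debugfs is mounted
at /sys/kernel/debug:

    echo 1 > /sys/kernel/debug/dma-api/all_errors
    echo foo > /sys/kernel/debug/dma-api/driver-filter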

If you have this code compiled into your kernel it will be enabled by
default. If you want to boot without the bookkeeping anyway you can
provide 'dma_debug=off' as a boot parameter. This will disable DMA-API
debugging. Notice that you cannot enable it again at runtime. You have
to reboot to do so.

If you want to see debug messages only for a specific device driver
you can specify the dma_debug_driver=<drivername> parameter. This will
enable the driver filter at boot time. The debug code will only print
errors for that driver afterwards. This filter can be disabled or
changed later using debugfs.

When the code disables itself at runtime this is most likely because
it ran out of dma_debug_entries. These entries are preallocated at
boot. The number of preallocated entries is defined per architecture.
If it is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the
architectural default.
