DMA attributes
==============

This document describes the semantics of the DMA attributes that are
defined in linux/dma-attrs.h.

DMA_ATTR_WRITE_BARRIER
----------------------

DMA_ATTR_WRITE_BARRIER is a (write) barrier attribute for DMA.  DMA
to a memory region with the DMA_ATTR_WRITE_BARRIER attribute forces
all pending DMA writes to complete, and thus provides a mechanism to
strictly order DMA from a device across all intervening busses and
bridges.  This barrier is not specific to a particular type of
interconnect; it applies to the system as a whole, and so its
implementation must account for the idiosyncrasies of the system all
the way from the DMA device to memory.

As an example of a situation where DMA_ATTR_WRITE_BARRIER would be
useful, suppose that a device does a DMA write to indicate that data is
ready and available in memory.  The DMA of the "completion indication"
could race with data DMA.  Mapping the memory used for completion
indications with DMA_ATTR_WRITE_BARRIER would prevent the race.

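Such a mapping might be set up as sketched below, using the struct
dma_attrs interface declared in linux/dma-attrs.h.  This is only an
illustration; 'dev', 'status_buf' and STATUS_SIZE are hypothetical
driver names, not part of the DMA API.

```c
#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>

/* Hypothetical sketch: map the completion-indication buffer so that
 * a DMA write to it forces all pending DMA writes to complete. */
struct dma_attrs attrs;
dma_addr_t dma_handle;

init_dma_attrs(&attrs);
dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);

dma_handle = dma_map_single_attrs(dev, status_buf, STATUS_SIZE,
				  DMA_FROM_DEVICE, &attrs);
```
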
DMA_ATTR_WEAK_ORDERING
----------------------

DMA_ATTR_WEAK_ORDERING specifies that reads and writes to the mapping
may be weakly ordered, that is, reads and writes may pass each other.

Since it is optional for platforms to implement DMA_ATTR_WEAK_ORDERING,
those that do not will simply ignore the attribute and exhibit default
behavior.

DMA_ATTR_WRITE_COMBINE
----------------------

DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be
buffered to improve performance.

Since it is optional for platforms to implement DMA_ATTR_WRITE_COMBINE,
those that do not will simply ignore the attribute and exhibit default
behavior.

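A write-combined allocation could be requested as in the following
sketch, for something like a frame buffer where buffered writes are
acceptable; 'dev' and FB_SIZE are assumed names:

```c
#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>

struct dma_attrs attrs;
dma_addr_t fb_dma;
void *fb_virt;

init_dma_attrs(&attrs);
dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);

/* Writes through fb_virt may be buffered/combined for performance. */
fb_virt = dma_alloc_attrs(dev, FB_SIZE, &fb_dma, GFP_KERNEL, &attrs);
```
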
DMA_ATTR_NON_CONSISTENT
-----------------------

DMA_ATTR_NON_CONSISTENT lets the platform choose to return either
consistent or non-consistent memory as it sees fit.  By using this API,
you are guaranteeing to the platform that you have all the correct and
necessary sync points for this memory in the driver.

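The sync points mentioned above are the dma_sync_* calls that bracket
any CPU access to the buffer, since the returned memory may be
non-consistent.  A minimal sketch, with 'dev' and SIZE as assumed
names:

```c
#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>

struct dma_attrs attrs;
dma_addr_t dma_handle;
void *vaddr;

init_dma_attrs(&attrs);
dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs);
vaddr = dma_alloc_attrs(dev, SIZE, &dma_handle, GFP_KERNEL, &attrs);

/* The memory may be non-consistent, so the driver must provide its
 * own sync points around CPU accesses: */
dma_sync_single_for_cpu(dev, dma_handle, SIZE, DMA_FROM_DEVICE);
/* ... CPU reads the buffer through vaddr ... */
dma_sync_single_for_device(dev, dma_handle, SIZE, DMA_FROM_DEVICE);
```
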
DMA_ATTR_NO_KERNEL_MAPPING
--------------------------

DMA_ATTR_NO_KERNEL_MAPPING lets the platform avoid creating a kernel
virtual mapping for the allocated buffer.  On some architectures creating
such a mapping is a non-trivial task and consumes very limited resources
(like kernel virtual address space or DMA consistent address space).
Buffers allocated with this attribute can only be passed to user space
by calling dma_mmap_attrs().  By using this API, you are guaranteeing
that you won't dereference the pointer returned by dma_alloc_attrs().
You can treat it as a cookie that must be passed to dma_mmap_attrs() and
dma_free_attrs().  Make sure that both of these also get this attribute
set on each call.

Since it is optional for platforms to implement
DMA_ATTR_NO_KERNEL_MAPPING, those that do not will simply ignore the
attribute and exhibit default behavior.

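The cookie lifecycle described above might look like the following
sketch; 'dev', 'vma', 'ret' and SIZE are assumed names from a
hypothetical driver:

```c
#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>

struct dma_attrs attrs;
dma_addr_t dma_handle;
void *cookie;

init_dma_attrs(&attrs);
dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);

/* 'cookie' must never be dereferenced; it is only valid as an
 * argument to dma_mmap_attrs() and dma_free_attrs(). */
cookie = dma_alloc_attrs(dev, SIZE, &dma_handle, GFP_KERNEL, &attrs);

/* in the driver's mmap file operation: */
ret = dma_mmap_attrs(dev, vma, cookie, dma_handle, SIZE, &attrs);

/* on teardown, with the same attribute still set: */
dma_free_attrs(dev, SIZE, cookie, dma_handle, &attrs);
```
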
DMA_ATTR_SKIP_CPU_SYNC
----------------------

By default the dma_map_{single,page,sg} family of functions transfers a
given buffer from the CPU domain to the device domain.  Some advanced
use cases might require sharing a buffer between more than one device.
This requires having a mapping created separately for each device and is
usually performed by calling the dma_map_{single,page,sg} function more
than once for the given buffer, with a device pointer for each device
taking part in the buffer sharing.  The first call transfers the buffer
from the 'CPU' domain to the 'device' domain, which synchronizes CPU
caches for the given region (usually it means that the cache has been
flushed or invalidated depending on the DMA direction).  However,
subsequent calls to dma_map_{single,page,sg}() for other devices will
perform exactly the same synchronization operation on the CPU cache.
CPU cache synchronization might be a time consuming operation,
especially if the buffers are large, so it is highly recommended to
avoid it if possible.  DMA_ATTR_SKIP_CPU_SYNC allows platform code to
skip synchronization of the CPU cache for the given buffer, assuming
that it has already been transferred to the 'device' domain.  This
attribute can also be used with the dma_unmap_{single,page,sg} family of
functions to force the buffer to stay in the device domain after
releasing a mapping for it.  Use this attribute with care!

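The buffer-sharing pattern described above can be sketched as follows;
'dev1', 'dev2', 'buf' and SIZE are hypothetical names:

```c
#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>

struct dma_attrs attrs;
dma_addr_t dma1, dma2;

/* The first mapping transfers the buffer to the 'device' domain and
 * performs the (possibly expensive) CPU cache synchronization. */
dma1 = dma_map_single(dev1, buf, SIZE, DMA_TO_DEVICE);

/* The buffer is already in the 'device' domain, so the second
 * mapping can skip the redundant cache synchronization. */
init_dma_attrs(&attrs);
dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
dma2 = dma_map_single_attrs(dev2, buf, SIZE, DMA_TO_DEVICE, &attrs);
```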