= Transparent Hugepage Support =

== Objective ==

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
backing virtual memory with huge pages that supports the automatic
promotion and demotion of page sizes, without the shortcomings of
hugetlbfs.

Currently it only works for anonymous memory mappings but in the
future it can expand to cover the pagecache layer, starting with
tmpfs.

Applications run faster because of two factors. The first factor is
almost completely irrelevant and not of significant interest, because
it also has the downside of requiring larger clear-page and copy-page
operations in page faults, which is a potentially negative effect. The
first factor consists in taking a single page fault for each 2M
virtual region touched by userland, reducing the enter/exit kernel
frequency by a factor of 512 (one 2M page covers 512 regular 4k
pages). This only matters the first time the memory is accessed for
the lifetime of a memory mapping. The second, long lasting and much
more important factor affects all subsequent accesses to the memory
for the whole runtime of the application. The second factor consists
of two components: 1) the TLB miss will run faster (especially with
virtualization using nested pagetables, but almost always also on bare
metal without virtualization) and 2) a single TLB entry will map a
much larger amount of virtual memory, in turn reducing the number of
TLB misses. With virtualization and nested pagetables, TLB entries of
larger size can be used only if both KVM and the Linux guest are using
hugepages, but a significant speedup already happens if only one of
the two is using hugepages, just because the TLB miss runs faster.

== Design ==

- "graceful fallback": mm components which don't have transparent
  hugepage knowledge fall back to breaking a transparent hugepage and
  working on the regular pages and their respective regular pmd/pte
  mappings

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated on hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to prevent unmovable pages from fragmenting all the memory, but such
  a tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

- this initial support only offers the feature in anonymous memory
  regions but it'd be ideal to move it to tmpfs and the pagecache
  later

Transparent Hugepage Support maximizes the usefulness of free memory
compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable to userland. It allows
paging and all other advanced VM features to be available on
hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, as for example they've been optimized before to avoid a
flood of mmap system calls for every malloc(4k). Optimizing userland
is by far not mandatory, and khugepaged can already take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases, when hugepages are enabled system wide, applications
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it; in that case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
MADV_HUGEPAGE madvise regions.

Embedded systems should enable hugepages only inside madvise regions,
to eliminate any risk of wasting any precious byte of memory and to
only run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.
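
As a hedged illustration (not part of this document's own examples), a
userland program could request hugepage backing for a critical region
as follows; the region size is invented and MADV_HUGEPAGE requires
headers from a THP-aware kernel/libc:

#include <sys/mman.h>
#include <stdio.h>

#define REGION_SIZE (64UL * 1024 * 1024)	/* example: 64M working set */

int main(void)
{
	/* anonymous mapping for the performance critical working set */
	void *buf = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* ask the kernel to back this region with hugepages; this
	 * also takes effect when enabled is set to "madvise" */
	if (madvise(buf, REGION_SIZE, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");	/* non-fatal */

	/* ... use buf ... */
	munmap(buf, REGION_SIZE);
	return 0;
}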

== sysfs ==

Transparent Hugepage Support can be entirely disabled (mostly for
debugging purposes), enabled only inside MADV_HUGEPAGE regions (to
avoid the risk of consuming more memory resources), or enabled system
wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled

It's also possible to limit the VM's defrag efforts (made to generate
hugepages when they're not immediately free) to madvise regions only,
or to never try to defrag memory and simply fall back to regular pages
unless hugepages are immediately available. Clearly if we spend CPU
time to defrag memory, we would expect to gain even more from the fact
that we use hugepages later instead of regular pages. This isn't
always guaranteed, but it may be more likely in case the allocation is
for a MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag

khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it
will be automatically shut down if it's set to "never".

khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0 or enable
defrag in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

and in the number of completed full scans:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans
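
For instance, a small monitoring utility could poll those two counters
with plain file I/O; this is only an illustrative sketch and
read_counter() is a made-up helper:

#include <stdio.h>

static long read_counter(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	const char *dir = "/sys/kernel/mm/transparent_hugepage/khugepaged";
	char path[128];

	snprintf(path, sizeof(path), "%s/pages_collapsed", dir);
	printf("pages_collapsed: %ld\n", read_counter(path));
	snprintf(path, sizeof(path), "%s/full_scans", dir);
	printf("full_scans: %ld\n", read_counter(path));
	return 0;
}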

== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always" or
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without "") to the kernel command line.

== Need of application restart ==

The transparent_hugepage/enabled values only affect future
behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to
the regions registered in khugepaged.

== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning to be released after the
I/O is complete, so they won't ever notice the fact that the page is
huge. But if any driver is going to mangle the page structure of the
tail page (like checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead (while serializing properly against
split_huge_page() to prevent the head and tail pages from disappearing
from under it; see the futex code for an example of that, hugetlbfs
also needed special handling in the futex code for similar reasons).

NOTE: these aren't new constraints to the GUP API; they match the same
constraints that apply to hugetlbfs too, so any driver capable of
handling GUP on hugetlbfs will also work fine on transparent hugepage
backed mappings.

In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them. Migration for example passes FOLL_SPLIT as a parameter to
follow_page because it's not hugepage aware and in fact it can't work
at all on hugetlbfs (but it instead works fine on transparent
hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
hugepages being returned, as it doesn't merely check the pfn of the
page and pin it during the copy: it expects to migrate the memory in
regular page sizes and with regular pte/pmd mappings.
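
A minimal in-kernel sketch of that pattern, assuming the caller
already holds mmap_sem for read and has looked up the vma (the
surrounding function is invented for illustration):

/* Illustrative sketch: process one user page in code that has no
 * transparent hugepage knowledge of its own. */
static int process_user_page(struct vm_area_struct *vma,
			     unsigned long addr)
{
	struct page *page;

	/* FOLL_SPLIT makes follow_page split a transparent hugepage
	 * before returning it, so "page" is always a regular page;
	 * FOLL_GET pins it for the duration of our work */
	page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);
	if (IS_ERR_OR_NULL(page))
		return page ? PTR_ERR(page) : -ENOENT;

	/* ... work on the regular 4k page ... */

	put_page(page);
	return 0;
}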

== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.
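
For example, a hedged sketch of such an allocation (the helper name
and the fixed 2M size, which matches x86 hugepages, are only
illustrative):

#include <stdlib.h>
#include <sys/mman.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

/* Return a hugepage-aligned buffer the kernel can map with 2M pages
 * from the first page fault onwards. */
static void *alloc_hugepage_aligned(size_t size)
{
	void *buf;

	/* natural 2M alignment is what allows an immediate huge
	 * mapping of the region */
	if (posix_memalign(&buf, HPAGE_SIZE, size))
		return NULL;

	/* optional: also takes effect when enabled is "madvise" */
	madvise(buf, size, MADV_HUGEPAGE);
	return buf;
}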

== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.

== Graceful fallback ==

Code walking pagetables but unaware of huge pmds can simply call
split_huge_page_pmd(mm, pmd) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_page_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage, for example.
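
A minimal sketch of that second case, assuming you hold a reference on
the page (the helper name is invented for illustration):

/* Ensure "page" can be handled by code without transparent hugepage
 * knowledge; the caller must hold a reference on the page. */
static int ensure_regular_page(struct page *page)
{
	if (!PageTransHuge(page))
		return 0;

	/* split_huge_page() returns nonzero if the page could not
	 * be split, e.g. because it was freed from under us */
	return split_huge_page(page) ? -EBUSY : 0;
}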

Example to make mremap.c transparent hugepage aware with a one liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
 		return NULL;
 
 	pmd = pmd_offset(pud, addr);
+	split_huge_page_pmd(mm, pmd);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;

== Locking in hugepage aware code ==

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_page_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
mm->page_table_lock and re-run pmd_trans_huge. Taking the
page_table_lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_page can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page_table_lock and fall back to the old code as
before. Otherwise you should run pmd_trans_splitting on the pmd. In
case pmd_trans_splitting returns true, it means split_huge_page is
already in the middle of splitting the page. So if pmd_trans_splitting
returns true, it's enough to drop the page_table_lock and call
wait_split_huge_page and then fall back to the old code paths. You are
guaranteed that by the time wait_split_huge_page returns, the pmd
isn't huge anymore. If pmd_trans_splitting returns false, you can
proceed to process the huge pmd and the hugepage natively. Once
finished you can drop the page_table_lock.
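
That sequence translates into a pattern roughly like the following
sketch (the function name and the placeholder comments are invented;
mmap_sem must already be held as described above):

/* Illustrative sketch of the locking sequence for a huge pmd aware
 * pagetable walk. */
static int walk_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
			unsigned long addr)
{
	struct mm_struct *mm = vma->vm_mm;

	if (pmd_trans_huge(*pmd)) {
		spin_lock(&mm->page_table_lock);
		if (likely(pmd_trans_huge(*pmd))) {
			if (unlikely(pmd_trans_splitting(*pmd))) {
				/* split_huge_page is already splitting
				 * it: wait, then use the regular path */
				spin_unlock(&mm->page_table_lock);
				wait_split_huge_page(vma->anon_vma, pmd);
			} else {
				/* ... process the huge pmd and the
				 * hugepage natively ... */
				spin_unlock(&mm->page_table_lock);
				return 0;
			}
		} else {
			/* it was split from under us before we took
			 * the lock: use the regular path */
			spin_unlock(&mm->page_table_lock);
		}
	}

	/* ... fallback: regular pte mappings ... */
	return 0;
}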

== compound_lock, get_user_pages and put_page ==

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the
page structures. It can do that easily for refcounts taken by huge pmd
mappings. But the gup API as created by hugetlbfs (that returns head
and tail pages if running get_user_pages on an address backed by any
hugepage) requires the refcount to be accounted on the tail pages and
not only on the head pages, if we want to be able to run
split_huge_page while there are gup pins established on any tail
page. Failure to be able to run split_huge_page if there's any gup pin
on any tail page would mean having to split all hugepages upfront in
get_user_pages, which is unacceptable as too many gup users are
performance critical and they must work natively on hugepages like
they work natively on hugetlbfs already (hugetlbfs is simpler because
hugetlbfs pages cannot be split, so there is no requirement to account
the pins on the tail pages for hugetlbfs). If we didn't account the
gup refcounts on the tail pages during gup, we wouldn't know anymore
which tail page is pinned by gup and which is not while we run
split_huge_page. But we still have to add the gup pin to the head page
too, to know when we can free the compound page in case it's never
split during its lifetime. That requires changing not just get_page,
but put_page as well, so that when put_page runs on a tail page (and
only on a tail page) it will find its respective head page, and then
it will decrease the head page refcount in addition to the tail page
refcount. To obtain a head page reliably and to decrease its refcount
without race conditions, put_page has to serialize against
__split_huge_page_refcount using a special per-page lock called
compound_lock.
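
A structural sketch of that put_page() behavior on tail pages (heavily
simplified; the real implementation in mm/swap.c handles many more
corner cases, and the function name here is invented):

/* Dropping a gup pin on a tail page: the compound_lock on the head
 * page serializes against __split_huge_page_refcount(), so "head"
 * cannot stop being the head page while we drop the references. */
static void put_tail_page_sketch(struct page *tail)
{
	struct page *head = compound_trans_head(tail);
	unsigned long flags;

	flags = compound_lock_irqsave(head);
	atomic_dec(&tail->_count);		/* tail page pin */
	compound_unlock_irqrestore(head, flags);

	put_page(head);		/* matching head page reference */
}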