The intent of this file is to give a brief summary of hugetlbpage support in
the Linux kernel. This support is built on top of the multiple page size
support that is provided by most modern architectures. For example, the i386
architecture supports 4K and 4M (2M in PAE mode) page sizes, the ia64
architecture supports multiple page sizes (4K, 8K, 64K, 256K, 1M, 4M, 16M,
256M) and ppc64 supports 4K and 16M. A TLB is a cache of virtual-to-physical
translations, and is typically a very scarce resource on a processor.
Operating systems try to make the best use of the limited number of TLB
entries available. This optimization is more critical now that larger and
larger physical memories (several GBs) are readily available.

Users can use the huge page support in the Linux kernel by either using the
mmap system call or the standard SYSV shared memory system calls (shmget,
shmat).

First, the Linux kernel needs to be built with the CONFIG_HUGETLBFS
(present under "File systems") and CONFIG_HUGETLB_PAGE (selected
automatically when CONFIG_HUGETLBFS is selected) configuration
options.
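
For reference, a kernel configured with hugetlbfs support will typically
carry the following options in its .config (a minimal sketch; the exact
set of related options varies by architecture):

    CONFIG_HUGETLBFS=y
    CONFIG_HUGETLB_PAGE=y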

The /proc/meminfo file provides information about the total number of
persistent hugetlb pages in the kernel's huge page pool. It also displays
information about the number of free, reserved and surplus huge pages and the
default huge page size. The huge page size is needed for generating the
proper alignment and size of the arguments to system calls that map huge page
regions.

The output of "cat /proc/meminfo" will include lines like:

.....
HugePages_Total: vvv
HugePages_Free:  www
HugePages_Rsvd:  xxx
HugePages_Surp:  yyy
Hugepagesize:    zzz kB

where:
HugePages_Total is the size of the pool of huge pages.
HugePages_Free  is the number of huge pages in the pool that are not yet
                allocated.
HugePages_Rsvd  is short for "reserved," and is the number of huge pages for
                which a commitment to allocate from the pool has been made,
                but no allocation has yet been made. Reserved huge pages
                guarantee that an application will be able to allocate a
                huge page from the pool of huge pages at fault time.
HugePages_Surp  is short for "surplus," and is the number of huge pages in
                the pool above the value in /proc/sys/vm/nr_hugepages. The
                maximum number of surplus huge pages is controlled by
                /proc/sys/vm/nr_overcommit_hugepages.

/proc/filesystems should also show a filesystem of type "hugetlbfs" configured
in the kernel.

/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge
pages in the kernel's huge page pool. "Persistent" huge pages will be
returned to the huge page pool when freed by a task. A user with root
privileges can dynamically allocate more or free some persistent huge pages
by increasing or decreasing the value of 'nr_hugepages'.
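
For example, the current size of the default-sized persistent pool can be
read back at any time with:

    cat /proc/sys/vm/nr_hugepages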

Pages that are used as huge pages are reserved inside the kernel and cannot
be used for other purposes. Huge pages cannot be swapped out under
memory pressure.

Once a number of huge pages have been pre-allocated to the kernel huge page
pool, a user with appropriate privilege can use either the mmap system call
or shared memory system calls to use the huge pages. See the discussion of
Using Huge Pages, below.

The administrator can allocate persistent huge pages on the kernel boot
command line by specifying the "hugepages=N" parameter, where 'N' = the
number of huge pages requested. This is the most reliable method of
allocating huge pages as memory has not yet become fragmented.
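
For example, appending the following to the kernel boot command line (the
count is illustrative) reserves 1024 default-sized huge pages at boot:

    hugepages=1024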

Some platforms support multiple huge page sizes. To allocate huge pages
of a specific size, one must precede the huge pages boot command parameters
with a huge page size selection parameter "hugepagesz=<size>". <size> must
be specified in bytes with optional scale suffix [kKmMgG]. The default huge
page size may be selected with the "default_hugepagesz=<size>" boot parameter.
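
As a sketch, on a platform that supports both 2M and 1G huge pages, a boot
command line such as the following (counts are illustrative) would reserve
pools of both sizes and make 2M the default:

    default_hugepagesz=2M hugepagesz=2M hugepages=512 hugepagesz=1G hugepages=4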

When multiple huge page sizes are supported, /proc/sys/vm/nr_hugepages
indicates the current number of pre-allocated huge pages of the default size.
Thus, one can use the following command to dynamically allocate/deallocate
default sized persistent huge pages:

echo 20 > /proc/sys/vm/nr_hugepages

This command will try to adjust the number of default sized huge pages in the
huge page pool to 20, allocating or freeing huge pages, as required.

On a NUMA platform, the kernel will attempt to distribute the huge page pool
over the set of allowed nodes specified by the NUMA memory policy of the
task that modifies nr_hugepages. The default for the allowed nodes--when the
task has default memory policy--is all on-line nodes with memory. Allowed
nodes with insufficient available, contiguous memory for a huge page will be
silently skipped when allocating persistent huge pages. See the discussion
below of the interaction of task memory policy, cpusets and per node attributes
with the allocation and freeing of persistent huge pages.

The success or failure of huge page allocation depends on the amount of
physically contiguous memory that is present in the system at the time of the
allocation attempt. If the kernel is unable to allocate huge pages from
some nodes in a NUMA system, it will attempt to make up the difference by
allocating extra pages on other nodes with sufficient available contiguous
memory, if any.

System administrators may want to put this command in one of the local rc
init files. This will enable the kernel to allocate huge pages early in
the boot process when the possibility of getting physically contiguous pages
is still very high. Administrators can verify the number of huge pages
actually allocated by checking the sysctl or meminfo. To check the per node
distribution of huge pages in a NUMA system, use:

cat /sys/devices/system/node/node*/meminfo | fgrep Huge

/proc/sys/vm/nr_overcommit_hugepages specifies how large the pool of
huge pages can grow, if more huge pages than /proc/sys/vm/nr_hugepages are
requested by applications. Writing any non-zero value into this file
indicates that the hugetlb subsystem is allowed to try to obtain that
number of "surplus" huge pages from the kernel's normal page pool, when the
persistent huge page pool is exhausted. As these surplus huge pages become
unused, they are freed back to the kernel's normal page pool.
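
For example, to allow up to 10 additional default-sized surplus huge pages to
be allocated on demand (the count is illustrative):

    echo 10 > /proc/sys/vm/nr_overcommit_hugepages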

When increasing the huge page pool size via nr_hugepages, any existing surplus
pages will first be promoted to persistent huge pages. Then, additional
huge pages will be allocated, if necessary and if possible, to fulfill
the new persistent huge page pool size.

The administrator may shrink the pool of persistent huge pages for
the default huge page size by setting the nr_hugepages sysctl to a
smaller value. The kernel will attempt to balance the freeing of huge pages
across all nodes in the memory policy of the task modifying nr_hugepages.
Any free huge pages on the selected nodes will be freed back to the kernel's
normal page pool.

Caveat: Shrinking the persistent huge page pool via nr_hugepages such that
it becomes less than the number of huge pages in use will convert the balance
of the in-use huge pages to surplus huge pages. This will occur even if
the number of surplus pages would exceed the overcommit value. As long as
this condition holds--that is, until nr_hugepages+nr_overcommit_hugepages is
increased sufficiently, or the surplus huge pages go out of use and are freed--
no more surplus huge pages will be allowed to be allocated.

With support for multiple huge page pools at run-time available, much of
the huge page userspace interface in /proc/sys/vm has been duplicated in sysfs.
The /proc interfaces discussed above have been retained for backwards
compatibility. The root huge page control directory in sysfs is:

/sys/kernel/mm/hugepages

For each huge page size supported by the running kernel, a subdirectory
will exist, of the form:

hugepages-${size}kB

Inside each of these directories, the same set of files will exist:

nr_hugepages
nr_hugepages_mempolicy
nr_overcommit_hugepages
free_hugepages
resv_hugepages
surplus_hugepages

which function as described above for the default huge page-sized case.
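
For example, assuming a system whose default huge page size is 2048 kB, the
default-sized pool could equivalently be resized through sysfs with:

    echo 20 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages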


Interaction of Task Memory Policy with Huge Page Allocation/Freeing

Whether huge pages are allocated and freed via the /proc interface or
the sysfs interface using the nr_hugepages_mempolicy attribute, the NUMA
nodes from which huge pages are allocated or freed are controlled by the
NUMA memory policy of the task that modifies the nr_hugepages_mempolicy
sysctl or attribute. When the nr_hugepages attribute is used, mempolicy
is ignored.

The recommended method to allocate or free huge pages to/from the kernel
huge page pool, using the nr_hugepages example above, is:

numactl --interleave <node-list> echo 20 \
        >/proc/sys/vm/nr_hugepages_mempolicy

or, more succinctly:

numactl -m <node-list> echo 20 >/proc/sys/vm/nr_hugepages_mempolicy

This will allocate or free abs(20 - nr_hugepages) to or from the nodes
specified in <node-list>, depending on whether the number of persistent huge
pages is initially less than or greater than 20, respectively. No huge pages
will be allocated nor freed on any node not included in the specified
<node-list>.

When adjusting the persistent hugepage count via nr_hugepages_mempolicy, any
memory policy mode--bind, preferred, local or interleave--may be used. The
resulting effect on persistent huge page allocation is as follows:

1) Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.txt],
   persistent huge pages will be distributed across the node or nodes
   specified in the mempolicy as if "interleave" had been specified.
   However, if a node in the policy does not contain sufficient contiguous
   memory for a huge page, the allocation will not "fallback" to the nearest
   neighbor node with sufficient contiguous memory. To do this would cause
   undesirable imbalance in the distribution of the huge page pool, or
   possibly, allocation of persistent huge pages on nodes not allowed by
   the task's memory policy.

2) One or more nodes may be specified with the bind or interleave policy.
   If more than one node is specified with the preferred policy, only the
   lowest numeric id will be used. Local policy will select the node where
   the task is running at the time the nodes_allowed mask is constructed.
   For local policy to be deterministic, the task must be bound to a cpu or
   cpus in a single node. Otherwise, the task could be migrated to some
   other node at any time after launch and the resulting node will be
   indeterminate. Thus, local policy is not very useful for this purpose.
   Any of the other mempolicy modes may be used to specify a single node.

3) The nodes allowed mask will be derived from any non-default task mempolicy,
   whether this policy was set explicitly by the task itself or one of its
   ancestors, such as numactl. This means that if the task is invoked from a
   shell with non-default policy, that policy will be used. One can specify a
   node list of "all" with numactl --interleave or --membind [-m] to achieve
   interleaving over all nodes in the system or cpuset.

4) Any task mempolicy specified--e.g., using numactl--will be constrained by
   the resource limits of any cpuset in which the task runs. Thus, there will
   be no way for a task with non-default policy running in a cpuset with a
   subset of the system nodes to allocate huge pages outside the cpuset
   without first moving to a cpuset that contains all of the desired nodes.

5) Boot-time huge page allocation attempts to distribute the requested number
   of huge pages over all on-line nodes with memory.

Per Node Hugepages Attributes

A subset of the contents of the root huge page control directory in sysfs,
described above, will be replicated under the system device of each
NUMA node with memory in:

/sys/devices/system/node/node[0-9]*/hugepages/

Under this directory, the subdirectory for each supported huge page size
contains the following attribute files:

nr_hugepages
free_hugepages
surplus_hugepages

The free_ and surplus_ attribute files are read-only. They return the number
of free and surplus [overcommitted] huge pages, respectively, on the parent
node.

The nr_hugepages attribute returns the total number of huge pages on the
specified node. When this attribute is written, the number of persistent huge
pages on the parent node will be adjusted to the specified value, if sufficient
resources exist, regardless of the task's mempolicy or cpuset constraints.
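
For example, to request 8 persistent huge pages of a hypothetical 2048 kB
size on node 1 only:

    echo 8 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages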

Note that the number of overcommit and reserve pages remain global quantities,
as we don't know until fault time, when the faulting task's mempolicy is
applied, from which node the huge page allocation will be attempted.


Using Huge Pages

If the user applications are going to request huge pages using the mmap system
call, then it is required that the system administrator mount a file system of
type hugetlbfs:

mount -t hugetlbfs \
	-o uid=<value>,gid=<value>,mode=<value>,size=<value>,nr_inodes=<value> \
	none /mnt/huge

This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
/mnt/huge. Any file created on /mnt/huge uses huge pages. The uid and gid
options set the owner and group of the root of the file system. By default
the uid and gid of the current process are taken. The mode option sets the
mode of the root of the file system to value & 0777. This value is given in
octal. By default the value 0755 is picked. The size option sets the maximum
amount of memory (huge pages) allowed for that filesystem (/mnt/huge). The
size is rounded down to HPAGE_SIZE. The option nr_inodes sets the maximum
number of inodes that /mnt/huge can use. If the size or nr_inodes option is
not provided on the command line then no limits are set. For the size and
nr_inodes options, you can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo.
For example, size=2K has the same meaning as size=2048.

While read system calls are supported on files that reside on hugetlb
file systems, write system calls are not.

Regular chown, chgrp, and chmod commands (with the right permissions) can be
used to change the file attributes on hugetlbfs.
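
As a rough sketch of the mmap path, the following program creates a file on
the hugetlbfs mount shown above and maps it. The mount point, file name and
mapping length are illustrative, and the length must be a multiple of the
huge page size (see also hugepage-mmap.c, referenced at the end of this file):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FILE_NAME "/mnt/huge/hugepagefile"
#define LENGTH (256UL * 1024 * 1024)	/* multiple of the huge page size */

int main(void)
{
	int fd;
	void *addr;

	/* Files created on a hugetlbfs mount are backed by huge pages. */
	fd = open(FILE_NAME, O_CREAT | O_RDWR, 0755);
	if (fd < 0) {
		perror("open");
		exit(1);
	}

	addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		close(fd);
		unlink(FILE_NAME);
		exit(1);
	}

	/* Touch the memory so that huge pages are actually faulted in. */
	memset(addr, 0, LENGTH);

	munmap(addr, LENGTH);
	close(fd);
	unlink(FILE_NAME);
	return 0;
}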

Also, it is important to note that no such mount command is required if the
applications are going to use only shmat/shmget system calls or mmap with
MAP_HUGETLB. Users who wish to use hugetlb pages via shared memory segments
should be members of a supplementary group, and the system admin needs to
configure that gid into /proc/sys/vm/hugetlb_shm_group. It is possible for
the same or different applications to use any combination of mmaps and shm*
calls, though a mount of the filesystem will be required for using mmap calls
without MAP_HUGETLB. For an example of how to use mmap with MAP_HUGETLB see
map_hugetlb.c.
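
A minimal sketch of an anonymous mapping with MAP_HUGETLB follows. The
fallback definition of MAP_HUGETLB is architecture specific, and the mapping
length is illustrative (it must be a multiple of the huge page size):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGETLB
#define MAP_HUGETLB 0x40000	/* arch specific */
#endif

#define LENGTH (256UL * 1024 * 1024)	/* multiple of the huge page size */

int main(void)
{
	void *addr;

	/* Anonymous, huge-page-backed mapping; no hugetlbfs mount needed. */
	addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}

	/* Touch the memory so that huge pages are actually faulted in. */
	memset(addr, 0, LENGTH);

	munmap(addr, LENGTH);
	return 0;
}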

*******************************************************************

/*
 * hugepage-shm: see Documentation/vm/hugepage-shm.c
 */

*******************************************************************

/*
 * hugepage-mmap: see Documentation/vm/hugepage-mmap.c
 */