mm/Kconfig

config SELECT_MEMORY_MODEL
    def_bool y
    depends on ARCH_SELECT_MEMORY_MODEL

choice
    prompt "Memory model"
    depends on SELECT_MEMORY_MODEL
    default DISCONTIGMEM_MANUAL if ARCH_DISCONTIGMEM_DEFAULT
    default SPARSEMEM_MANUAL if ARCH_SPARSEMEM_DEFAULT
    default FLATMEM_MANUAL

config FLATMEM_MANUAL
    bool "Flat Memory"
    depends on !(ARCH_DISCONTIGMEM_ENABLE || ARCH_SPARSEMEM_ENABLE) || ARCH_FLATMEM_ENABLE
    help
      This option allows you to change some of the ways that
      Linux manages its memory internally. Most users will
      only have one option here: FLATMEM. This is normal
      and a correct option.

      Some users of more advanced features like NUMA and
      memory hotplug may have different options here.
      DISCONTIGMEM is a more mature, better tested system,
      but is incompatible with memory hotplug and may suffer
      decreased performance over SPARSEMEM. If unsure between
      "Sparse Memory" and "Discontiguous Memory", choose
      "Discontiguous Memory".

      If unsure, choose this option (Flat Memory) over any other.

config DISCONTIGMEM_MANUAL
    bool "Discontiguous Memory"
    depends on ARCH_DISCONTIGMEM_ENABLE
    help
      This option provides enhanced support for discontiguous
      memory systems, over FLATMEM. These systems have holes
      in their physical address spaces, and this option provides
      more efficient handling of these holes. However, the vast
      majority of hardware has quite flat address spaces, and
      can have degraded performance from the extra overhead that
      this option imposes.

      Many NUMA configurations will have this as the only option.

      If unsure, choose "Flat Memory" over this option.

config SPARSEMEM_MANUAL
    bool "Sparse Memory"
    depends on ARCH_SPARSEMEM_ENABLE
    help
      This will be the only option for some systems, including
      memory hotplug systems. This is normal.

      For many other systems, this will be an alternative to
      "Discontiguous Memory". This option provides some potential
      performance benefits, along with decreased code complexity,
      but it is newer, and more experimental.

      If unsure, choose "Discontiguous Memory" or "Flat Memory"
      over this option.

endchoice

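#
# Illustrative sketch (not part of this file): an architecture opts into the
# choice above by defining the ARCH_* symbols referenced there in its own
# arch/*/Kconfig, roughly like this for a sparsemem-by-default platform:
#
#     config ARCH_SELECT_MEMORY_MODEL
#         def_bool y
#
#     config ARCH_SPARSEMEM_ENABLE
#         def_bool y
#
#     config ARCH_SPARSEMEM_DEFAULT
#         def_bool y
#
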
config DISCONTIGMEM
    def_bool y
    depends on (!SELECT_MEMORY_MODEL && ARCH_DISCONTIGMEM_ENABLE) || DISCONTIGMEM_MANUAL

config SPARSEMEM
    def_bool y
    depends on (!SELECT_MEMORY_MODEL && ARCH_SPARSEMEM_ENABLE) || SPARSEMEM_MANUAL

config FLATMEM
    def_bool y
    depends on (!DISCONTIGMEM && !SPARSEMEM) || FLATMEM_MANUAL

config FLAT_NODE_MEM_MAP
    def_bool y
    depends on !SPARSEMEM

#
# Both the NUMA code and DISCONTIGMEM use arrays of pg_data_t's
# to represent different areas of memory. This variable allows
# those dependencies to exist individually.
#
config NEED_MULTIPLE_NODES
    def_bool y
    depends on DISCONTIGMEM || NUMA

config HAVE_MEMORY_PRESENT
    def_bool y
    depends on ARCH_HAVE_MEMORY_PRESENT || SPARSEMEM

#
# SPARSEMEM_EXTREME (which is the default) does some bootmem
# allocations when memory_present() is called. If this cannot
# be done on your architecture, select this option. However,
# statically allocating the mem_section[] array can potentially
# consume vast quantities of .bss, so be careful.
#
# This option will also potentially produce smaller runtime code
# with gcc 3.4 and later.
#
config SPARSEMEM_STATIC
    bool

#
# Architecture platforms which require a two level mem_section in SPARSEMEM
# must select this option. This is usually for architecture platforms with
# an extremely sparse physical address space.
#
config SPARSEMEM_EXTREME
    def_bool y
    depends on SPARSEMEM && !SPARSEMEM_STATIC

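#
# Rough sketch of what the two variants above mean for the mem_section table
# (simplified from include/linux/mmzone.h; not part of this file):
#
#     #ifdef CONFIG_SPARSEMEM_EXTREME
#     extern struct mem_section *mem_section[NR_SECTION_ROOTS];      /* roots allocated at boot */
#     #else
#     extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];  /* static, in .bss */
#     #endif
#
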
config SPARSEMEM_VMEMMAP_ENABLE
    bool

config SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
    def_bool y
    depends on SPARSEMEM && X86_64

config SPARSEMEM_VMEMMAP
    bool "Sparse Memory virtual memmap"
    depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE
    default y
    help
      SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise
      pfn_to_page and page_to_pfn operations. This is the most
      efficient option when sufficient kernel resources are available.

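#
# With a virtually mapped memmap the pfn/page conversions mentioned above
# reduce to pointer arithmetic against the vmemmap base, roughly (simplified
# from include/asm-generic/memory_model.h; not part of this file):
#
#     #define __pfn_to_page(pfn)    (vmemmap + (pfn))
#     #define __page_to_pfn(page)   ((unsigned long)((page) - vmemmap))
#
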
config HAVE_MEMBLOCK
    boolean

config HAVE_MEMBLOCK_NODE_MAP
    boolean

config ARCH_DISCARD_MEMBLOCK
    boolean

config NO_BOOTMEM
    boolean

config MEMORY_ISOLATION
    boolean

config MOVABLE_NODE
    boolean "Enable to assign a node which has only movable memory"
    depends on HAVE_MEMBLOCK
    depends on NO_BOOTMEM
    depends on X86_64
    depends on NUMA
    default n
    help
      Allow a node to have only movable memory. Pages used by the kernel,
      such as direct mapping pages, cannot be migrated, so the corresponding
      memory device cannot be hotplugged. This option allows users to
      online all the memory of a node as movable memory so that the whole
      node can be hotplugged. Users who don't use the memory hotplug
      feature are fine with this option on since they don't online memory
      as movable.

      Say Y here if you want to hotplug a whole node.
      Say N here if you want the kernel to use memory on all nodes evenly.

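#
# Illustrative usage of the onlining described above (sysfs memory-hotplug
# interface, see Documentation/memory-hotplug.txt; memoryN stands for a real
# memory block directory): online a hotplugged block as movable so the node
# stays removable:
#
#     # echo online_movable > /sys/devices/system/memory/memoryN/state
#
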
#
# Only to be set on architectures that have completely implemented the memory
# hotplug feature. If you are not sure, don't touch it.
#
config HAVE_BOOTMEM_INFO_NODE
    def_bool n

# eventually, we can have this option just 'select SPARSEMEM'
config MEMORY_HOTPLUG
    bool "Allow for memory hot-add"
    depends on SPARSEMEM || X86_64_ACPI_NUMA
    depends on ARCH_ENABLE_MEMORY_HOTPLUG
    depends on (IA64 || X86 || PPC_BOOK3S_64 || SUPERH || S390)

config MEMORY_HOTPLUG_SPARSE
    def_bool y
    depends on SPARSEMEM && MEMORY_HOTPLUG

config MEMORY_HOTREMOVE
    bool "Allow for memory hot remove"
    select MEMORY_ISOLATION
    select HAVE_BOOTMEM_INFO_NODE if (X86_64 || PPC64)
    depends on MEMORY_HOTPLUG && ARCH_ENABLE_MEMORY_HOTREMOVE
    depends on MIGRATION

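#
# Illustrative sketch for the hot-remove option above (same sysfs interface as
# for onlining; memoryN is a placeholder): offline a memory block before
# physically removing it:
#
#     # echo offline > /sys/devices/system/memory/memoryN/state
#
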
#
# If we have space for more page flags then we can enable additional
# optimizations and functionality.
#
# Regular Sparsemem takes page flag bits for the sectionid if it does not
# use a virtual memmap. Disable extended page flags for 32 bit platforms
# that require the use of a sectionid in the page flags.
#
config PAGEFLAGS_EXTENDED
    def_bool y
    depends on 64BIT || SPARSEMEM_VMEMMAP || !SPARSEMEM

# Heavily threaded applications may benefit from splitting the mm-wide
# page_table_lock, so that faults on different parts of the user address
# space can be handled with less contention: split it at this NR_CPUS.
# Default to 4 for wider testing, though 8 might be more appropriate.
# ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
# PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
# DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
#
config SPLIT_PTLOCK_CPUS
    int
    default "999999" if ARM && !CPU_CACHE_VIPT
    default "999999" if PARISC && !PA20
    default "999999" if DEBUG_SPINLOCK || DEBUG_LOCK_ALLOC
    default "4"

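#
# Sketch of how the threshold above is consumed (roughly as in
# include/linux/mm_types.h of this kernel era): split page-table locks are
# used only when the kernel is built for at least this many CPUs:
#
#     #define USE_SPLIT_PTLOCKS   (NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS)
#
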
#
# support for memory balloon compaction
config BALLOON_COMPACTION
    bool "Allow for balloon memory compaction/migration"
    def_bool y
    depends on COMPACTION && VIRTIO_BALLOON
    help
      Memory fragmentation introduced by ballooning can significantly
      reduce the number of 2MB contiguous memory blocks usable within a
      guest, imposing performance penalties because fewer transparent
      huge pages are available to the guest workload. Allowing compaction
      and migration of pages that are part of memory balloon devices
      avoids this scenario and helps memory defragmentation.

#
# support for memory compaction
config COMPACTION
    bool "Allow for memory compaction"
    def_bool y
    select MIGRATION
    depends on MMU
    help
      Allows the compaction of memory for the allocation of huge pages.

#
# support for page migration
#
config MIGRATION
    bool "Page migration"
    def_bool y
    depends on (NUMA || ARCH_ENABLE_MEMORY_HOTREMOVE || COMPACTION || CMA) && MMU
    help
      Allows the migration of the physical location of pages of processes
      while the virtual addresses are not changed. This is useful in
      two situations. The first is on NUMA systems to put pages nearer
      to the processors accessing them. The second is when allocating huge
      pages, as migration can relocate pages to satisfy a huge page
      allocation instead of reclaiming.

config PHYS_ADDR_T_64BIT
    def_bool 64BIT || ARCH_PHYS_ADDR_T_64BIT

config ZONE_DMA_FLAG
    int
    default "0" if !ZONE_DMA
    default "1"

config BOUNCE
    bool "Enable bounce buffers"
    default y
    depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
    help
      Enable bounce buffers for devices that cannot access
      the full range of memory available to the CPU. Enabled
      by default when ZONE_DMA or HIGHMEM is selected, but you
      may say n to override this.

# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
# have more than 4GB of memory, but we don't currently use the IOTLB to present
# a 32-bit address to OHCI. So we need to use a bounce pool instead.
#
# We also use the bounce pool to provide stable page writes for jbd. jbd
# initiates buffer writeback without locking the page or setting PG_writeback,
# and fixing that behavior (a second time; jbd2 doesn't have this problem) is
# a major rework effort. Instead, use the bounce buffer to snapshot pages
# (until jbd goes away). The only jbd user is ext3.
config NEED_BOUNCE_POOL
    bool
    default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && JBD)

config NR_QUICK
    int
    depends on QUICKLIST
    default "2" if AVR32
    default "1"

config VIRT_TO_BUS
    bool
    help
      An architecture should select this if it implements the
      deprecated interface virt_to_bus(). All new architectures
      should probably not select this.

config MMU_NOTIFIER
    bool

config KSM
    bool "Enable KSM for page merging"
    depends on MMU
    help
      Enable Kernel Samepage Merging: KSM periodically scans those areas
      of an application's address space that an app has advised may be
      mergeable. When it finds pages of identical content, it replaces
      the many instances by a single page with that content, so
      saving memory until one or another app needs to modify the content.
      Recommended for use with KVM, or with other duplicative applications.
      See Documentation/vm/ksm.txt for more information: KSM is inactive
      until a program has madvised that an area is MADV_MERGEABLE, and
      root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).

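#
# Illustrative usage of the interfaces named in the help text above (addr and
# length assumed to be a valid mapping; see Documentation/vm/ksm.txt): the
# application marks a region mergeable, and root starts the scanner:
#
#     madvise(addr, length, MADV_MERGEABLE);    /* in the application */
#     # echo 1 > /sys/kernel/mm/ksm/run          (as root, needs CONFIG_SYSFS)
#
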
config DEFAULT_MMAP_MIN_ADDR
    int "Low address space to protect from user allocation"
    depends on MMU
    default 4096
    help
      This is the portion of low virtual memory which should be protected
      from userspace allocation. Keeping a user from writing to low pages
      can help reduce the impact of kernel NULL pointer bugs.

      For most ia64, ppc64 and x86 users with lots of address space
      a value of 65536 is reasonable and should cause no problems.
      On arm and other archs it should not be higher than 32768.
      Programs which use vm86 functionality or have some need to map
      this low address space will need CAP_SYS_RAWIO or disable this
      protection by setting the value to 0.

      This value can be changed after boot using the
      /proc/sys/vm/mmap_min_addr tunable.

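#
# Illustrative usage of the runtime tunable mentioned above (the value 65536
# is taken from the help text):
#
#     # sysctl -w vm.mmap_min_addr=65536
#     # cat /proc/sys/vm/mmap_min_addr
#
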
config ARCH_SUPPORTS_MEMORY_FAILURE
    bool

config MEMORY_FAILURE
    depends on MMU
    depends on ARCH_SUPPORTS_MEMORY_FAILURE
    bool "Enable recovery from hardware memory errors"
    select MEMORY_ISOLATION
    help
      Enables code to recover from some memory failures on systems
      with MCA recovery. This allows a system to continue running
      even when some of its memory has uncorrected errors. This requires
      special hardware support and typically ECC memory.

config HWPOISON_INJECT
    tristate "HWPoison pages injector"
    depends on MEMORY_FAILURE && DEBUG_KERNEL && PROC_FS
    select PROC_PAGE_MONITOR

config NOMMU_INITIAL_TRIM_EXCESS
    int "Turn on mmap() excess space trimming before booting"
    depends on !MMU
    default 1
    help
      The NOMMU mmap() frequently needs to allocate large contiguous chunks
      of memory on which to store mappings, but it can only ask the system
      allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
      more than it requires. To deal with this, mmap() is able to trim off
      the excess and return it to the allocator.

      If trimming is enabled, the excess is trimmed off and returned to the
      system allocator, which can cause extra fragmentation, particularly
      if there are a lot of transient processes.

      If trimming is disabled, the excess is kept, but not used, which for
      long-term mappings means that the space is wasted.

      Trimming can be dynamically controlled through a sysctl option
      (/proc/sys/vm/nr_trim_pages) which specifies the minimum number of
      excess pages there must be before trimming should occur, or zero if
      no trimming is to occur.

      This option specifies the initial value of the nr_trim_pages sysctl.
      The default of 1 says that all excess pages should be trimmed.

      See Documentation/nommu-mmap.txt for more information.

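#
# Illustrative usage of the sysctl described above: require at least 64 excess
# pages before trimming, or disable trimming entirely (the value 64 is just an
# example):
#
#     # echo 64 > /proc/sys/vm/nr_trim_pages
#     # echo 0  > /proc/sys/vm/nr_trim_pages      (never trim)
#
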
config TRANSPARENT_HUGEPAGE
    bool "Transparent Hugepage Support"
    depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE
    select COMPACTION
    help
      Transparent Hugepages allows the kernel to use huge pages and
      huge TLB entries transparently for applications whenever possible.
      This feature can improve computing performance for certain
      applications by speeding up page faults during memory
      allocation, by reducing the number of TLB misses and by speeding
      up the page-table walking.

      If you are memory constrained on an embedded system, you may
      want to say N.

choice
    prompt "Transparent Hugepage Support sysfs defaults"
    depends on TRANSPARENT_HUGEPAGE
    default TRANSPARENT_HUGEPAGE_ALWAYS
    help
      Selects the sysfs defaults for Transparent Hugepage Support.

    config TRANSPARENT_HUGEPAGE_ALWAYS
        bool "always"
    help
      Enabling Transparent Hugepage always can increase the
      memory footprint of applications without a guaranteed
      benefit, but it will work automatically for all applications.

    config TRANSPARENT_HUGEPAGE_MADVISE
        bool "madvise"
    help
      Enabling Transparent Hugepage madvise will only provide a
      performance benefit to applications that use
      madvise(MADV_HUGEPAGE), but it won't risk increasing the
      memory footprint of applications without a guaranteed
      benefit.
endchoice

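#
# The default chosen above seeds the runtime knob (with CONFIG_SYSFS), which
# can also be changed after boot; applications can opt in per mapping with
# madvise(MADV_HUGEPAGE). Illustrative:
#
#     # echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
#
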
config CROSS_MEMORY_ATTACH
    bool "Cross Memory Support"
    depends on MMU
    default y
    help
      Enabling this option adds the system calls process_vm_readv and
      process_vm_writev, which allow a process with the correct privileges
      to directly read from or write to another process's address space.
      See the man page for more details.

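#
# Illustrative call for the syscalls named above (pid, buf, remote_addr and
# len are assumed to be set up by the caller; see process_vm_readv(2)): copy
# bytes from another process's address space into a local buffer:
#
#     struct iovec local  = { .iov_base = buf,                 .iov_len = len };
#     struct iovec remote = { .iov_base = (void *)remote_addr, .iov_len = len };
#     ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
#
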
#
# UP and nommu archs use km based percpu allocator
#
config NEED_PER_CPU_KM
    depends on !SMP
    bool
    default y

config CLEANCACHE
    bool "Enable cleancache driver to cache clean pages if tmem is present"
    default n
    help
      Cleancache can be thought of as a page-granularity victim cache
      for clean pages that the kernel's pageframe replacement algorithm
      (PFRA) would like to keep around, but can't since there isn't enough
      memory. So when the PFRA "evicts" a page, it first attempts to use
      cleancache code to put the data contained in that page into
      "transcendent memory", memory that is not directly accessible or
      addressable by the kernel and is of unknown and possibly
      time-varying size. And when a cleancache-enabled
      filesystem wishes to access a page in a file on disk, it first
      checks cleancache to see if it already contains it; if it does,
      the page is copied into the kernel and a disk access is avoided.
      When a transcendent memory driver is available (such as zcache or
      Xen transcendent memory), a significant I/O reduction
      may be achieved. When none is available, all cleancache calls
      are reduced to a single pointer-compare-against-NULL resulting
      in a negligible performance hit.

      If unsure, say Y to enable cleancache.

config FRONTSWAP
    bool "Enable frontswap to cache swap pages if tmem is present"
    depends on SWAP
    default n
    help
      Frontswap is so named because it can be thought of as the opposite
      of a "backing" store for a swap device. The data is stored into
      "transcendent memory", memory that is not directly accessible or
      addressable by the kernel and is of unknown and possibly
      time-varying size. When space in transcendent memory is available,
      a significant swap I/O reduction may be achieved. When none is
      available, all frontswap calls are reduced to a single pointer-
      compare-against-NULL resulting in a negligible performance hit
      and swap data is stored as normal on the matching swap device.

      If unsure, say Y to enable frontswap.

config CMA
    bool "Contiguous Memory Allocator"
    depends on HAVE_MEMBLOCK && MMU
    select MIGRATION
    select MEMORY_ISOLATION
    help
      This enables the Contiguous Memory Allocator which allows other
      subsystems to allocate big physically-contiguous blocks of memory.
      CMA reserves a region of memory and allows only movable pages to
      be allocated from it. This way, the kernel can use the memory for
      pagecache and, when a subsystem requests a contiguous area, the
      allocated pages are migrated away to serve the contiguous request.

      If unsure, say "n".

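#
# Illustrative only: the size of the reserved CMA region can be set on the
# kernel command line (see Documentation/kernel-parameters.txt), e.g.:
#
#     cma=64M
#
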
config CMA_DEBUG
    bool "CMA debug messages (DEVELOPMENT)"
    depends on DEBUG_KERNEL && CMA
    help
      Turns on debug messages in CMA. This produces KERN_DEBUG
      messages for every CMA call as well as various messages while
      processing calls such as dma_alloc_from_contiguous().
      This option does not affect warning and error messages.

config ZBUD
    tristate
    default n
    help
      A special purpose allocator for storing compressed pages.
      It is designed to store up to two compressed pages per physical
      page. While this design limits storage density, it has simple and
      deterministic reclaim properties that make it preferable to a higher
      density approach when reclaim will be used.

config ZSWAP
    bool "Compressed cache for swap pages (EXPERIMENTAL)"
    depends on FRONTSWAP && CRYPTO=y
    select CRYPTO_LZO
    select ZBUD
    default n
    help
      A lightweight compressed cache for swap pages. It takes
      pages that are in the process of being swapped out and attempts to
      compress them into a dynamically allocated RAM-based memory pool.
      This can result in a significant I/O reduction on the swap device
      and, in the case where decompressing from RAM is faster than swap
      device reads, can also improve workload performance.

      This is marked experimental because it is a new feature (as of
      v3.11) that interacts heavily with memory reclaim. While these
      interactions don't cause any known issues on simple memory setups,
      they have not been fully explored on the large set of potential
      configurations and workloads that exist.

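#
# Illustrative only: zswap stays dormant unless enabled, e.g. on the kernel
# command line at boot:
#
#     zswap.enabled=1
#
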
config MEM_SOFT_DIRTY
    bool "Track memory changes"
    depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY
    select PROC_PAGE_MONITOR
    help
      This option enables tracking of memory changes by introducing a
      soft-dirty bit on PTEs. This bit is set when someone writes into a
      page, just like the regular dirty bit, but unlike the latter it can
      be cleared by hand from userspace.

      See Documentation/vm/soft-dirty.txt for more details.

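#
# Illustrative usage of the clearing mentioned above (see
# Documentation/vm/soft-dirty.txt; <pid> is a placeholder): reset the
# soft-dirty bits for a task before taking a new snapshot of its pagemap:
#
#     # echo 4 > /proc/<pid>/clear_refs
#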
