kernel_optimize_test/mm
Dmitry Vyukov 64abdcb243 kasan: eliminate long stalls during quarantine reduction
Currently we dedicate 1/32 of RAM to the quarantine and then reduce it by
1/4 of the total quarantine size at a time.  This can be a significant
amount of memory.  For example, with 4GB of RAM the total quarantine size
is 128MB and it is reduced by 32MB at a time.  With 128GB of RAM the total
quarantine size is 4GB and it is reduced by 1GB.  This leads to several
problems:

 - freeing 1GB can take tens of seconds, causing RCU stall warnings and
   introducing unexpectedly long delays at random places
 - if kmalloc() is called under a mutex, other threads stall on that
   mutex while that thread reduces the quarantine
 - threads wait on quarantine_lock while one thread grabs a large batch
   of objects to evict
 - we walk the uncached list of objects to free twice, which makes all of
   the above worse
 - when a thread frees objects, they are no longer accounted against
   global_quarantine.bytes; as a result we can have quarantine_size bytes
   in the quarantine plus an unbounded amount of memory held in large
   batches by threads that are in the process of freeing

Reduce the quarantine in smaller batches to shorten the delays.  The only
reason to reduce it in batches at all is to amortize overheads; the new
batch size of 1MB should be large enough to amortize the spinlock
lock/unlock and a few function calls.

Additionally, organize the quarantine as a FIFO array of batches, as
sketched below.  This avoids walking the list in quarantine_reduce() under
quarantine_lock, which in turn reduces contention and is simply faster.
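
For illustration, here is a minimal userspace sketch of the new layout.
Names follow mm/kasan/quarantine.c, but the qlist helpers, the per-cpu
queues, and the real spinlock are reduced to stand-ins, so this is an
assumption-laden sketch rather than the kernel code:

#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

#define QUARANTINE_BATCHES 1024	/* illustrative; the examples below
					 * suggest it scales with CPU count */

struct qlist_node {			/* link embedded in each object */
	struct qlist_node *next;
};

struct qlist_head {			/* one batch: list plus its size */
	struct qlist_node *head;
	struct qlist_node *tail;
	size_t bytes;
};

/* FIFO ring of batches, guarded by quarantine_lock. */
static struct qlist_head global_quarantine[QUARANTINE_BATCHES];
static int quarantine_head;		/* oldest batch, evicted first */
static size_t quarantine_size;		/* total bytes across all batches */
static size_t quarantine_max_size;	/* ~1/32 of RAM */
static pthread_mutex_t quarantine_lock = PTHREAD_MUTEX_INITIALIZER;

static void qlist_free_all(struct qlist_head *q)
{
	struct qlist_node *n = q->head;

	while (n) {
		struct qlist_node *next = n->next;

		free(n);	/* the kernel returns objects to the slab */
		n = next;
	}
}

static void quarantine_reduce(void)
{
	struct qlist_head to_free = { NULL, NULL, 0 };

	pthread_mutex_lock(&quarantine_lock);
	if (quarantine_size > quarantine_max_size) {
		/* Detach exactly one small batch: O(1), no list walk
		 * under the lock. */
		to_free = global_quarantine[quarantine_head];
		global_quarantine[quarantine_head] =
			(struct qlist_head){ NULL, NULL, 0 };
		quarantine_size -= to_free.bytes;
		if (++quarantine_head == QUARANTINE_BATCHES)
			quarantine_head = 0;
	}
	pthread_mutex_unlock(&quarantine_lock);

	/* The expensive part, freeing the objects, runs unlocked. */
	qlist_free_all(&to_free);
}

Because each batch is capped at roughly 1MB, the critical section is short
and constant-time; the lock is held only for a pointer swap, not for the
freeing itself.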

This improves performance under heavy load (syzkaller fuzzing) by ~20%
with 4 CPUs and 32GB of RAM.  It also eliminates the frequent (every 5
sec) drops in CPU consumption from ~400% to ~100% that occur when one
thread reduces the quarantine while the others wait on a mutex.

Some reference numbers (the sketch after this list reproduces the
batch-size arithmetic):
1. Machine with 4 CPUs and 4GB of memory. Quarantine size 128MB.
   Currently we free 32MB at a time.
   With the new code we free 1MB at a time (1024 batches, ~128 are used).
2. Machine with 32 CPUs and 128GB of memory. Quarantine size 4GB.
   Currently we free 1GB at a time.
   With the new code we free 8MB at a time (1024 batches, ~512 are used).
3. Machine with 4096 CPUs and 1TB of memory. Quarantine size 32GB.
   Currently we free 8GB at a time.
   With the new code we free 4MB at a time (16K batches, ~8K are used).
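
The batch size implied by these numbers can be reproduced with the
following sketch.  The constants are assumptions inferred from the three
examples above, not quoted from the kernel source:

#include <stdio.h>

/* Assumed policy: at least 4 batches per CPU, never fewer than 1024
 * batches, with a 1MB floor on the batch size.  Assumes a 64-bit
 * unsigned long. */
static unsigned long batch_bytes(unsigned long total, unsigned long nr_cpus)
{
	unsigned long batches = 4 * nr_cpus > 1024 ? 4 * nr_cpus : 1024;
	unsigned long sz = 2 * total / batches;

	return sz > (1UL << 20) ? sz : (1UL << 20);
}

int main(void)
{
	/* 128MB quarantine, 4 CPUs   -> 1MB (floor), ~128 of 1024 batches */
	printf("%luMB\n", batch_bytes(128UL << 20, 4) >> 20);
	/* 4GB quarantine, 32 CPUs    -> 8MB, ~512 of 1024 batches */
	printf("%luMB\n", batch_bytes(4UL << 30, 32) >> 20);
	/* 32GB quarantine, 4096 CPUs -> 4MB, ~8K of 16K batches */
	printf("%luMB\n", batch_bytes(32UL << 30, 4096) >> 20);
	return 0;
}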

Link: http://lkml.kernel.org/r/1478756952-18695-1-git-send-email-dvyukov@google.com
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-12-12 18:55:09 -08:00
kasan kasan: eliminate long stalls during quarantine reduction 2016-12-12 18:55:09 -08:00
backing-dev.c
balloon_compaction.c
bootmem.c mm: kmemleak: avoid using __va() on addresses that don't have a lowmem mapping 2016-10-11 15:06:33 -07:00
cleancache.c
cma_debug.c
cma.c mm/cma.c: check the max limit for cma allocation 2016-11-11 08:12:37 -08:00
cma.h
compaction.c mm, compaction: fix NR_ISOLATED_* stats for pfn based migration 2016-12-12 18:55:07 -08:00
debug_page_ref.c
debug.c mm, debug: print raw struct page data in __dump_page() 2016-12-12 18:55:08 -08:00
dmapool.c
early_ioremap.c
fadvise.c
failslab.c
filemap.c mm: workingset: restore refault tracking for single-page files 2016-12-12 18:55:08 -08:00
frame_vector.c mm: replace get_vaddr_frames() write/force parameters with gup_flags 2016-10-19 08:11:24 -07:00
frontswap.c
gup.c mm: fix up get_user_pages* comments 2016-12-12 18:55:07 -08:00
highmem.c
huge_memory.c mm: make transparent hugepage size public 2016-12-12 18:55:09 -08:00
hugetlb_cgroup.c
hugetlb.c mm: add tlb_remove_check_page_size_change to track page size change 2016-12-12 18:55:07 -08:00
hwpoison-inject.c
init-mm.c
internal.h
interval_tree.c
Kconfig mm: THP page cache support for ppc64 2016-12-12 18:55:08 -08:00
Kconfig.debug
khugepaged.c mm: THP page cache support for ppc64 2016-12-12 18:55:08 -08:00
kmemcheck.c
kmemleak-test.c
kmemleak.c kmemleak: fix reference to Documentation 2016-12-12 18:55:07 -08:00
ksm.c
list_lru.c mm/list_lru.c: avoid error-path NULL pointer deref 2016-10-27 18:43:42 -07:00
maccess.c
madvise.c mm: add tlb_remove_check_page_size_change to track page size change 2016-12-12 18:55:07 -08:00
Makefile Disable the __builtin_return_address() warning globally after all 2016-10-12 10:23:41 -07:00
memblock.c mm: kmemleak: avoid using __va() on addresses that don't have a lowmem mapping 2016-10-11 15:06:33 -07:00
memcontrol.c mm: memcontrol: use special workqueue for creating per-memcg caches 2016-12-12 18:55:06 -08:00
memory_hotplug.c mm: remove x86-only restriction of movable_node 2016-12-12 18:55:07 -08:00
memory-failure.c mm: hwpoison: fix thp split handling in memory_failure() 2016-11-11 08:12:37 -08:00
memory.c mm: THP page cache support for ppc64 2016-12-12 18:55:08 -08:00
mempolicy.c mm/mempolicy.c: forbid static or relative flags for local NUMA mode 2016-12-12 18:55:07 -08:00
mempool.c
memtest.c
migrate.c lib: radix-tree: check accounting of existing slot replacement users 2016-12-12 18:55:08 -08:00
mincore.c
mlock.c thp: fix corner case of munlock() of PTE-mapped THPs 2016-11-30 16:32:52 -08:00
mm_init.c
mmap.c
mmu_context.c
mmu_notifier.c
mmzone.c
mprotect.c mm/pkeys: generate pkey system call code only if ARCH_HAS_PKEYS is selected 2016-12-12 18:55:07 -08:00
mremap.c mremap: move_ptes: check pte dirty after its removal 2016-11-29 08:20:24 -08:00
msync.c
nobootmem.c mm: kmemleak: avoid using __va() on addresses that don't have a lowmem mapping 2016-10-11 15:06:33 -07:00
nommu.c mm: unexport __get_user_pages() 2016-10-24 19:13:20 -07:00
oom_kill.c
page_alloc.c mm, page_alloc: keep pcp count and list contents in sync if struct page is corrupted 2016-12-12 18:55:08 -08:00
page_counter.c
page_ext.c
page_idle.c
page_io.c
page_isolation.c
page_owner.c
page_poison.c
page-writeback.c
pagewalk.c
percpu-km.c
percpu-vm.c
percpu.c
pgtable-generic.c
process_vm_access.c mm: remove write/force parameters from __get_user_pages_unlocked() 2016-10-18 14:13:37 -07:00
quicklist.c
readahead.c mm: don't cap request size based on read-ahead setting 2016-12-12 18:55:08 -08:00
rmap.c mm, rmap: handle anon_vma_prepare() common case inline 2016-12-12 18:55:08 -08:00
shmem.c lib: radix-tree: update callback for changing leaf nodes 2016-12-12 18:55:08 -08:00
slab_common.c mm/slab_common.c: check kmem_create_cache flags are common 2016-12-12 18:55:06 -08:00
slab.c mm, slab: maintain total slab count instead of active count 2016-12-12 18:55:07 -08:00
slab.h mm, slab: maintain total slab count instead of active count 2016-12-12 18:55:07 -08:00
slob.c slub: move synchronize_sched out of slab_mutex on shrink 2016-12-12 18:55:06 -08:00
slub.c slub: avoid false-positive warning 2016-12-12 18:55:06 -08:00
sparse-vmemmap.c
sparse.c
swap_cgroup.c
swap_state.c
swap.c
swapfile.c mm: add three more cond_resched() in swapoff 2016-12-12 18:55:08 -08:00
truncate.c mm: workingset: move shadow entry tracking to radix tree exceptional tracking 2016-12-12 18:55:08 -08:00
usercopy.c
userfaultfd.c
util.c Merge branch 'mm-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip 2016-10-22 09:39:10 -07:00
vmacache.c
vmalloc.c mm: add preempt points into __purge_vmap_area_lazy() 2016-12-12 18:55:08 -08:00
vmpressure.c
vmscan.c mm/vmscan.c: set correct defer count for shrinker 2016-12-12 18:55:07 -08:00
vmstat.c
workingset.c mm: workingset: update shadow limit to reflect bigger active list 2016-12-12 18:55:08 -08:00
z3fold.c
zbud.c
zpool.c
zsmalloc.c
zswap.c