kernel_optimize_test/mm
Hugh Dickins 1ae7000630 [PATCH] holepunch: fix shmem_truncate_range punch locking
Miklos Szeredi observes that during truncation of shmem page directories,
info->lock is released to improve latency (after lowering i_size and
next_index to exclude races); but this is quite wrong for holepunching, which
receives no such protection from i_size or next_index, and is left vulnerable
to races with shmem_unuse, shmem_getpage and shmem_writepage.

Hold info->lock throughout when holepunching?  No, any user could prevent
rescheduling for far too long.  Instead take info->lock just when needed: in
shmem_free_swp when removing the swap entries, and whenever removing a
directory page from the level above.  But so long as we remove before
scanning, we can safely skip taking the lock at the lower levels, except at
misaligned start and end of the hole.
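
A rough sketch of that locking pattern follows (illustrative only, not the actual shmem.c patch; shmem_punch_free_swp and the punch_lock parameter are hypothetical names, while info->lock, swp_entry_t and free_swap_and_cache are the real kernel objects):

	#include <linux/spinlock.h>
	#include <linux/swap.h>

	/*
	 * Free the swap entries in one shmem directory page.  For a hole
	 * punch, punch_lock points at info->lock and is taken only around
	 * each individual removal, so shmem_unuse/shmem_getpage/
	 * shmem_writepage never see a half-freed entry, yet the lock is
	 * never held across a long scan.  Plain truncation passes NULL:
	 * the lowered i_size/next_index already exclude the races, so no
	 * lock is needed at the lower levels.
	 */
	static void shmem_punch_free_swp(swp_entry_t *dir, swp_entry_t *edir,
					 spinlock_t *punch_lock)
	{
		swp_entry_t *ptr;

		for (ptr = dir; ptr < edir; ptr++) {
			if (!ptr->val)
				continue;
			if (punch_lock) {
				spin_lock(punch_lock);
				/* recheck: a racer may have cleared it */
				if (!ptr->val) {
					spin_unlock(punch_lock);
					continue;
				}
			}
			free_swap_and_cache(*ptr);
			*ptr = (swp_entry_t){0};
			if (punch_lock)
				spin_unlock(punch_lock);
		}
	}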

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Miklos Szeredi <mszeredi@suse.cz>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-03-29 08:22:25 -07:00
allocpercpu.c
backing-dev.c
bootmem.c
bounce.c block: blk_max_pfn is sometimes wrong 2007-03-27 08:52:47 +02:00
fadvise.c
filemap_xip.c
filemap.c
filemap.h
fremap.c
highmem.c
hugetlb.c
internal.h
Kconfig
madvise.c
Makefile
memory_hotplug.c
memory.c
mempolicy.c
mempool.c
migrate.c
mincore.c
mlock.c
mmap.c
mmzone.c
mprotect.c
mremap.c
msync.c
nommu.c [PATCH] NOMMU: make SYSV SHM nattch work correctly 2007-03-22 19:39:06 -07:00
oom_kill.c
page_alloc.c
page_io.c
page-writeback.c
pdflush.c
prio_tree.c
readahead.c
rmap.c
shmem_acl.c
shmem.c [PATCH] holepunch: fix shmem_truncate_range punch locking 2007-03-29 08:22:25 -07:00
slab.c
slob.c
sparse.c
swap_state.c
swap.c
swapfile.c
thrash.c
tiny-shmem.c
truncate.c
util.c
vmalloc.c
vmscan.c
vmstat.c