kernel_optimize_test/mm
Yuri Tikhonov 61609d01cb shmem: fix division by zero
Fix a division by zero that occurs in shmem_truncate_range() and
shmem_unuse_inode() when using large PAGE_SIZE values (e.g.  256kB on
ppc44x).

With a 256kB PAGE_SIZE, the ENTRIES_PER_PAGEPAGE constant no longer fits in
32 bits (0x1.0000.0000) and wraps to zero on a 32-bit kernel, which is what
triggers the division by zero; this patch just changes its type from
'unsigned long' to 'unsigned long long'.

Hugh: reverted its unsigned long longs in shmem_truncate_range() and
shmem_getpage(): the pagecache index cannot exceed an unsigned long, so
those divisions by zero occurred only in unreached code.  It's a pity we need
any ULL arithmetic here, but I found no pretty way to avoid it.

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-13 15:04:32 -07:00
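For illustration only (not part of the patch): a minimal userspace sketch of
the overflow described above, assuming the shape of the shmem.c macros where
ENTRIES_PER_PAGE is PAGE_CACHE_SIZE divided by the size of an entry, and
using uint32_t as a stand-in for the 32-bit kernel's 'unsigned long'.

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/*
 * Stand-ins for the mm/shmem.c macros (names kept, definitions hypothetical):
 * uint32_t models the 32-bit kernel's 'unsigned long', and ENTRIES_PER_PAGE
 * follows the PAGE_CACHE_SIZE / sizeof(unsigned long) shape of the original.
 */
#define PAGE_CACHE_SIZE		(256u * 1024u)	/* 256kB pages, e.g. ppc44x */
#define ENTRIES_PER_PAGE	((uint32_t)(PAGE_CACHE_SIZE / sizeof(uint32_t)))

int main(void)
{
	/* Old 'unsigned long' ENTRIES_PER_PAGEPAGE: 65536 * 65536 = 2^32
	 * wraps to 0 in 32 bits, so any later division by it blows up. */
	uint32_t old_eppp = ENTRIES_PER_PAGE * ENTRIES_PER_PAGE;

	/* New 'unsigned long long' version keeps the full 0x1.0000.0000. */
	uint64_t new_eppp = (uint64_t)ENTRIES_PER_PAGE * ENTRIES_PER_PAGE;

	printf("ENTRIES_PER_PAGE           = %" PRIu32 "\n", ENTRIES_PER_PAGE);
	printf("old (32-bit unsigned long) = %" PRIu32 "  <- divisor becomes zero\n",
	       old_eppp);
	printf("new (unsigned long long)   = 0x%" PRIx64 "\n", new_eppp);
	return 0;
}
```

Widening only the macro's type to unsigned long long, as the patch does, keeps
the intermediate product 64-bit wherever it is used, at the cost of the ULL
arithmetic Hugh mentions above.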
allocpercpu.c
backing-dev.c
bootmem.c
bounce.c
debug-pagealloc.c
dmapool.c
fadvise.c
failslab.c
filemap_xip.c
filemap.c
fremap.c
highmem.c
hugetlb.c
internal.h
Kconfig
Kconfig.debug
maccess.c
madvise.c
Makefile
memcontrol.c memcg: remove warning when CONFIG_DEBUG_VM=n 2009-04-13 15:04:32 -07:00
memory_hotplug.c
memory.c
mempolicy.c
mempool.c
migrate.c
mincore.c
mlock.c
mm_init.c
mmap.c
mmu_notifier.c
mmzone.c
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page_alloc.c
page_cgroup.c
page_io.c
page_isolation.c
page-writeback.c
pagewalk.c
pdflush.c
percpu.c
prio_tree.c
quicklist.c
readahead.c
rmap.c
shmem_acl.c
shmem.c shmem: fix division by zero 2009-04-13 15:04:32 -07:00
slab.c
slob.c
slub.c
sparse-vmemmap.c
sparse.c
swap_state.c
swap.c
swapfile.c
thrash.c
truncate.c
util.c mm: document get_user_pages_fast() 2009-04-13 15:04:32 -07:00
vmalloc.c
vmscan.c
vmstat.c