mm/fs: fix pessimization in hole-punching pagecache
I wanted to revert my v3.1 commit d0823576bf ("mm: pincer in
truncate_inode_pages_range"), to keep truncate_inode_pages_range() in
synch with shmem_undo_range(); but have stepped back - a change to
hole-punching in truncate_inode_pages_range() is a change to
hole-punching in every filesystem (except tmpfs) that supports it.
If there's a logical proof why no filesystem can depend for its own
correctness on the pincer guarantee in truncate_inode_pages_range() - an
instant when the entire hole is removed from pagecache - then let's
revisit later. But the evidence is that only tmpfs suffered from the
livelock, and we have no intention of extending hole-punch to ramfs. So
for now just add a few comments (to match or differ from those in
shmem_undo_range()), and fix one silliness noticed in d0823576bf4b...
Its "index == start" addition to the hole-punch termination test was
incomplete: it opened a way for the end condition to be missed, and for
the loop to go on looking through the radix_tree, all the way to the end
of file. Fix that pessimization by resetting index when this is detected
in the inner loop.
Note that it's actually hard to hit this case, without the obsessive
concurrent faulting that trinity does: normally all pages are removed in
the initial trylock_page() pass, and this loop finds nothing to do. I
had to "#if 0" out the initial pass to reproduce bug and test fix.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Lukas Czerner <lczerner@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 792ceaefe6
parent b1a366500b
@@ -355,14 +355,16 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	for ( ; ; ) {
 		cond_resched();
 		if (!pagevec_lookup_entries(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE),
-			indices)) {
+			min(end - index, (pgoff_t)PAGEVEC_SIZE), indices)) {
+			/* If all gone from start onwards, we're done */
 			if (index == start)
 				break;
+			/* Otherwise restart to make sure all gone */
 			index = start;
 			continue;
 		}
 		if (index == start && indices[0] >= end) {
+			/* All gone out of hole to be punched, we're done */
 			pagevec_remove_exceptionals(&pvec);
 			pagevec_release(&pvec);
 			break;
@@ -373,8 +375,11 @@ void truncate_inode_pages_range(struct address_space *mapping,

 			/* We rely upon deletion not changing page->index */
 			index = indices[i];
-			if (index >= end)
+			if (index >= end) {
+				/* Restart punch to make sure all gone */
+				index = start - 1;
 				break;
+			}

 			if (radix_tree_exceptional_entry(page)) {
 				clear_exceptional_entry(mapping, index, page);