mm/swapfile.c: delete the "last_in_cluster < scan_base" loop in the body of scan_swap_map()

Since commit ebc2a1a691 ("swap: make cluster allocation per-cpu"), every
SWP_SOLIDSTATE "seek is cheap" (SSD) case already takes the
si->cluster_info scan_swap_map_try_ssd_cluster() route.  The
"last_in_cluster < scan_base" loop in the body of scan_swap_map() has
therefore become dead code and should be deleted.

Delete the redundant loop, as Hugh and Shaohua suggested.

[hughd@google.com: fix comment, simplify code]
Signed-off-by: Chen Yucong <slaoub@gmail.com>
Cc: Shaohua Li <shli@kernel.org>
Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Chen Yucong 2014-06-04 16:10:57 -07:00 committed by Linus Torvalds
parent 100873d7a7
commit 50088c4409

@@ -523,13 +523,10 @@ static unsigned long scan_swap_map(struct swap_info_struct *si,
 		/*
 		 * If seek is expensive, start searching for new cluster from
 		 * start of partition, to minimize the span of allocated swap.
-		 * But if seek is cheap, search from our current position, so
-		 * that swap is allocated from all over the partition: if the
-		 * Flash Translation Layer only remaps within limited zones,
-		 * we don't want to wear out the first zone too quickly.
+		 * If seek is cheap, that is the SWP_SOLIDSTATE si->cluster_info
+		 * case, just handled by scan_swap_map_try_ssd_cluster() above.
 		 */
-		if (!(si->flags & SWP_SOLIDSTATE))
-			scan_base = offset = si->lowest_bit;
+		scan_base = offset = si->lowest_bit;
 		last_in_cluster = offset + SWAPFILE_CLUSTER - 1;
 
 		/* Locate the first empty (unaligned) cluster */
@@ -549,26 +546,6 @@ static unsigned long scan_swap_map(struct swap_info_struct *si,
 			}
 		}
 
-	offset = si->lowest_bit;
-	last_in_cluster = offset + SWAPFILE_CLUSTER - 1;
-
-	/* Locate the first empty (unaligned) cluster */
-	for (; last_in_cluster < scan_base; offset++) {
-		if (si->swap_map[offset])
-			last_in_cluster = offset + SWAPFILE_CLUSTER;
-		else if (offset == last_in_cluster) {
-			spin_lock(&si->lock);
-			offset -= SWAPFILE_CLUSTER - 1;
-			si->cluster_next = offset;
-			si->cluster_nr = SWAPFILE_CLUSTER - 1;
-			goto checks;
-		}
-		if (unlikely(--latency_ration < 0)) {
-			cond_resched();
-			latency_ration = LATENCY_LIMIT;
-		}
-	}
-
 	offset = scan_base;
 	spin_lock(&si->lock);
 	si->cluster_nr = SWAPFILE_CLUSTER - 1;