lib/rhashtable.c: simplify a strange allocation pattern

The allocation pattern in alloc_bucket_locks is quite unusual.  We
prefer vmalloc when CONFIG_NUMA is enabled.  The rationale is that
vmalloc will respect the memory policy of the current process and so
the backing memory will get distributed over multiple nodes if the
requester is configured properly.  At least that is the intention; in
reality the rhashtable is shrunk and expanded from a kernel worker, so
no mempolicy can be assumed.

Let's just simplify the code and use the kvmalloc helper, which
transparently uses kmalloc with a vmalloc fallback when the caller is
allowed to block, and plain kmalloc with the given gfp flags otherwise.
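
For reference, the idea behind kvmalloc is sketched below.  This is a
simplified illustration of the kmalloc-with-vmalloc-fallback pattern,
not the actual mm/util.c implementation; the name kvmalloc_sketch is
made up for this example, and it assumes gfp flags that allow blocking
(vmalloc cannot be used otherwise, which is why the patch guards the
call with gfpflags_allow_blocking()).

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Simplified sketch of the kmalloc-with-vmalloc-fallback idea behind
 * kvmalloc.  Assumes gfp flags that allow blocking (e.g. GFP_KERNEL);
 * the real helper handles more corner cases.
 */
static void *kvmalloc_sketch(size_t size, gfp_t gfp)
{
        gfp_t kmalloc_gfp = gfp;
        void *p;

        /* For large requests, try kmalloc quietly and without retrying hard. */
        if (size > PAGE_SIZE)
                kmalloc_gfp |= __GFP_NOWARN | __GFP_NORETRY;

        p = kmalloc(size, kmalloc_gfp);
        if (p || size <= PAGE_SIZE)
                return p;

        /* Fall back to vmalloc for multi-page requests that failed above. */
        return vmalloc(size);
}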

Link: http://lkml.kernel.org/r/20170306103032.2540-4-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Tom Herbert <tom@herbertland.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author: Michal Hocko, 2017-05-08 15:57:18 -07:00, committed by Linus Torvalds
parent 6c5ab6511f
commit 43ca5bc4f7

@@ -86,16 +86,9 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
 		size = min(size, 1U << tbl->nest);
 
 	if (sizeof(spinlock_t) != 0) {
-		tbl->locks = NULL;
-#ifdef CONFIG_NUMA
-		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
-		    gfp == GFP_KERNEL)
-			tbl->locks = vmalloc(size * sizeof(spinlock_t));
-#endif
-		if (gfp != GFP_KERNEL)
-			gfp |= __GFP_NOWARN | __GFP_NORETRY;
-		if (!tbl->locks)
+		if (gfpflags_allow_blocking(gfp))
+			tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
+		else
 			tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
 						   gfp);
 		if (!tbl->locks)
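
As a usage note, memory obtained from kvmalloc must be released with
kvfree(), which picks kfree() or vfree() depending on how the memory
was allocated.  A minimal, hypothetical caller following the same
pattern as the patched alloc_bucket_locks could look like the sketch
below (alloc_lock_array and free_lock_array are made-up names for
illustration, not part of the patch).

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical example following the pattern used by the patch. */
static spinlock_t *alloc_lock_array(unsigned int nr, gfp_t gfp)
{
        spinlock_t *locks;
        unsigned int i;

        if (gfpflags_allow_blocking(gfp))
                locks = kvmalloc(nr * sizeof(*locks), gfp);
        else
                locks = kmalloc_array(nr, sizeof(*locks), gfp);
        if (!locks)
                return NULL;

        for (i = 0; i < nr; i++)
                spin_lock_init(&locks[i]);

        return locks;
}

/* The counterpart must use kvfree(), which handles both backends. */
static void free_lock_array(spinlock_t *locks)
{
        kvfree(locks);
}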