tmp_suning_uos_patched/kernel/sched
Tejun Heo feb245e304 sched/core: Allow kthreads to fall back to online && !active cpus
During CPU hotplug, CPU_ONLINE callbacks are run while the CPU is
online but not active.  A CPU_ONLINE callback may create or bind a
kthread so that its cpus_allowed mask only allows the CPU which is
being brought online.  The kthread may start executing before the CPU
is made active and can end up in select_fallback_rq().

In such cases, the expected behavior is selecting the CPU which is
coming online; however, because select_fallback_rq() only chooses from
active CPUs, it determines that the task doesn't have any viable CPU
in its allowed mask and ends up overriding it to cpu_possible_mask.

CPU_ONLINE callbacks should be able to put kthreads on the CPU which
is coming online.  Update select_fallback_rq() so that it follows
cpu_online() rather than cpu_active() for kthreads.
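
As a rough sketch only (the exact hunk applied in this tree may differ),
the per-CPU check inside select_fallback_rq() relaxes from cpu_active()
to cpu_online() for PF_KTHREAD tasks along these lines:

	/*
	 * Sketch of the fallback scan in kernel/sched/core.c; the
	 * surrounding retry and cpuset handling is omitted for brevity.
	 */
	for_each_cpu(dest_cpu, tsk_cpus_allowed(p)) {
		/* Userspace tasks must still land on an active CPU. */
		if (!(p->flags & PF_KTHREAD) && !cpu_active(dest_cpu))
			continue;
		/* Kthreads may use any online CPU, active or not. */
		if (!cpu_online(dest_cpu))
			continue;
		goto out;
	}

This keeps the stricter cpu_active() requirement for ordinary tasks while
letting kthreads bound by a CPU_ONLINE callback stay on the CPU that is
coming online instead of being pushed out to cpu_possible_mask.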

Reported-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Tested-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-team@fb.com
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lkml.kernel.org/r/20160616193504.GB3262@mtj.duckdns.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-06-24 08:26:53 +02:00
auto_group.c
auto_group.h
clock.c
completion.c
core.c sched/core: Allow kthreads to fall back to online && !active cpus 2016-06-24 08:26:53 +02:00
cpuacct.c
cpuacct.h
cpudeadline.c
cpudeadline.h
cpufreq_schedutil.c
cpufreq.c
cpupri.c
cpupri.h
cputime.c
deadline.c
debug.c
fair.c sched/fair: Do not announce throttled next buddy in dequeue_task_fair() 2016-06-24 08:26:45 +02:00
features.h
idle_task.c
idle.c
loadavg.c
Makefile
rt.c
sched.h sched/fair: Initialize throttle_count for new task-groups lazily 2016-06-24 08:26:44 +02:00
stats.c
stats.h
stop_task.c
swait.c
wait.c