tmp_suning_uos_patched/kernel/sched
Joonsoo Kim d31980846f sched: Move up affinity check to mitigate useless redoing overhead
Currently, LBF_ALL_PINNED is cleared only after the affinity check
passes. So if task migration is skipped in move_tasks() because of a
small load value or a small imbalance value, LBF_ALL_PINNED stays set
and we end up triggering 'redo' in load_balance().
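For context, this is roughly how load_balance() reacts to
LBF_ALL_PINNED in this era's kernel/sched/fair.c (a condensed sketch,
not the verbatim source; the surrounding retry bookkeeping is elided):

    cur_ld_moved = move_tasks(&env);
    ...
    /*
     * All tasks on the busiest rq looked pinned by cpu affinity,
     * so drop that cpu from the candidate mask and retry the
     * balance with another one. The problem: a task skipped only
     * for being too light also left this flag set.
     */
    if (unlikely(env.flags & LBF_ALL_PINNED)) {
        cpumask_clear_cpu(cpu_of(busiest), cpus);
        if (!cpumask_empty(cpus)) {
            env.loop = 0;
            env.loop_break = sched_nr_migrate_break;
            goto redo;
        }
        goto out_balanced;
    }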

The imbalance value is often so small that no task can be moved to
another cpu, and this situation can persist even after we change the
target cpu. So move the affinity check up and clear LBF_ALL_PINNED
before evaluating the load value, in order to mitigate the useless
redoing overhead.
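A condensed sketch of the resulting order in move_tasks() (not the
verbatim diff; the loop/break bookkeeping is elided):

    while (!list_empty(tasks)) {
        p = list_first_entry(tasks, struct task_struct, se.group_node);

        /*
         * Run can_migrate_task() first: its affinity check clears
         * LBF_ALL_PINNED as soon as one task could run on dst_cpu,
         * even if that task is then skipped for being too light.
         */
        if (!can_migrate_task(p, env))
            goto next;

        load = task_h_load(p);

        /*
         * Previously these load checks ran before the affinity
         * check, so a task skipped here never got the chance to
         * clear LBF_ALL_PINNED.
         */
        if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
            goto next;

        if ((load / 2) > env->imbalance)
            goto next;

        move_task(p, env);
        ...
    next:
        list_move_tail(&p->se.group_node, tasks);
    }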

In addition, reorder some comments so they match the new order of the
checks.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Jason Low <jason.low2@hp.com>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1366705662-3587-5-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-04-24 08:52:44 +02:00
auto_group.c
auto_group.h
clock.c
core.c sched/cpuacct: Initialize root cpuacct earlier 2013-04-10 13:54:20 +02:00
cpuacct.c sched/cpuacct/UML: Fix header file dependency bug on the UML build 2013-04-10 15:12:41 +02:00
cpuacct.h sched/cpuacct: Initialize root cpuacct earlier 2013-04-10 13:54:20 +02:00
cpupri.c
cpupri.h
cputime.c sched/cpuacct: Add cpuacct_acount_field() 2013-04-10 13:54:17 +02:00
debug.c
fair.c sched: Move up affinity check to mitigate useless redoing overhead 2013-04-24 08:52:44 +02:00
features.h
idle_task.c sched: Fix wrong rq's runnable_avg update with rt tasks 2013-04-21 11:22:52 +02:00
Makefile sched: Split cpuacct code out of core.c 2013-04-10 13:54:15 +02:00
rt.c
sched.h sched: Fix wrong rq's runnable_avg update with rt tasks 2013-04-21 11:22:52 +02:00
stats.c
stats.h
stop_task.c