kernel_optimize_test/kernel/sched
Steven Rostedt (VMware) 8663effb24 sched/core: Call __schedule() from do_idle() without enabling preemption
I finally got around to creating trampolines for dynamically allocated
ftrace_ops by using synchronize_rcu_tasks(). For users of the ftrace
function hook callbacks, like perf, that allocate the ftrace_ops
descriptor via kmalloc() and friends, ftrace was not able to optimize
the functions being traced to use a trampoline, because the trampoline
itself would also need to be allocated dynamically. The problem is that
such trampolines cannot be freed when CONFIG_PREEMPT is set, as there's
no way to tell whether a task was preempted while executing on one. That
changed when Paul McKenney implemented synchronize_rcu_tasks(), which
waits until all tasks (except idle) have either scheduled out or entered
user space.
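
In other words, the freeing protocol is: wait until no task can still be
executing on the trampoline text, then release it. A minimal sketch of
that idea (the helper names release_trampoline() and
free_trampoline_memory() are illustrative; only synchronize_rcu_tasks()
is the real API):

  /*
   * Hypothetical sketch of freeing a dynamically allocated trampoline.
   * Only synchronize_rcu_tasks() is a real kernel API here.
   */
  static void release_trampoline(struct ftrace_ops *ops)
  {
          /*
           * Wait for every task to voluntarily schedule or enter user
           * space. After this returns, no task can be preempted in the
           * middle of the trampoline, so its memory can be freed.
           */
          synchronize_rcu_tasks();
          free_trampoline_memory(ops->trampoline);
  }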

While testing this, I triggered this bug:

 BUG: unable to handle kernel paging request at ffffffffa0230077
 ...
 RIP: 0010:0xffffffffa0230077
 ...
 Call Trace:
  schedule+0x5/0xe0
  schedule_preempt_disabled+0x18/0x30
  do_idle+0x172/0x220

What happened was that the idle task was preempted on the trampoline.
As synchronize_rcu_tasks() ignores the idle thread, there's nothing
that lets ftrace know that the idle task was preempted on a trampoline.

The idle task should never need to enable preemption. The idle task is
simply a loop that calls schedule or places the CPU into idle mode. In
fact, having preemption enabled there is inefficient, because an
interrupt can arrive just as idle is about to call schedule anyway,
causing schedule to be called twice: once when the interrupt returns
back to normal context, and again in the normal path that the idle loop
is running in, which is pointless, as it has already scheduled.
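
Roughly, the relevant shape of the idle loop is the following (a
simplified sketch of do_idle() in kernel/sched/idle.c, not the verbatim
source):

  static void do_idle(void)
  {
          while (!need_resched()) {
                  local_irq_disable();
                  arch_cpu_idle_enter();
                  /* sit in an idle state until something sets need_resched() */
                  cpuidle_idle_call();
                  arch_cpu_idle_exit();
          }
          /*
           * need_resched() is set, so idle is about to schedule anyway.
           * Enabling preemption here (which schedule_preempt_disabled()
           * does) only opens a window for an interrupt to schedule a
           * second, redundant time.
           */
          schedule_preempt_disabled();
  }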

The only reason schedule_preempt_disabled() enables preemption is to be
able to call sched_submit_work(), which requires preemption to be
enabled. As that call is a nop when the task is in the TASK_RUNNING
state, and idle is always in the running state, there's no reason for
idle to enable preemption. But that also means idle cannot use
schedule_preempt_disabled(), as other callers of that function rely on
sched_submit_work() being called.
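
For context, schedule_preempt_disabled() is essentially the following
(simplified, not the verbatim source), which is why the preemption
enable/disable dance exists at all:

  void __sched schedule_preempt_disabled(void)
  {
          sched_preempt_enable_no_resched();
          schedule();             /* schedule() calls sched_submit_work() */
          preempt_disable();
  }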

Adding a new function, local to kernel/sched/, that allows idle to call
the scheduler without enabling preemption fixes the
synchronize_rcu_tasks() issue and also removes the pointless spurious
schedule calls caused by interrupts arriving in the brief window where
preemption is enabled just before schedule is called.
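
A plausible shape for that new function, sketched from the description
above (the name schedule_idle() and the exact body are assumptions; see
the patch itself for the real implementation):

  void __sched schedule_idle(void)
  {
          /*
           * sched_submit_work() is skipped: it is a nop for tasks in the
           * TASK_RUNNING state, and idle is always TASK_RUNNING, so there
           * is no need to enable preemption just to call it.
           */
          do {
                  __schedule(false);
          } while (need_resched());
  }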

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170414084809.3dacde2a@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-05-15 10:09:12 +02:00
autogroup.c sched/autogroup: Rename auto_group.[ch] to autogroup.[ch] 2017-02-08 09:01:11 +01:00
autogroup.h sched/headers: Prepare for new header dependencies before moving code to <linux/sched/autogroup.h> 2017-03-02 08:42:28 +01:00
clock.c sched/clock: Fix broken stable to unstable transfer 2017-03-27 10:23:48 +02:00
completion.c sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h> 2017-03-02 08:42:34 +01:00
core.c sched/core: Call __schedule() from do_idle() without enabling preemption 2017-05-15 10:09:12 +02:00
cpuacct.c sched/cputime: Convert kcpustat to nsecs 2017-02-01 09:13:47 +01:00
cpuacct.h
cpudeadline.c sched/core: Remove the tsk_cpus_allowed() wrapper 2017-03-02 08:42:24 +01:00
cpudeadline.h sched/deadline: Split cpudl_set() into cpudl_set() and cpudl_clear() 2016-09-05 13:29:43 +02:00
cpufreq_schedutil.c cpufreq: schedutil: Use policy-dependent transition delays 2017-04-17 18:37:27 +02:00
cpufreq.c cpufreq / sched: Pass flags to cpufreq_update_util() 2016-08-16 22:14:55 +02:00
cpupri.c sched/core: Remove the tsk_cpus_allowed() wrapper 2017-03-02 08:42:24 +01:00
cpupri.h
cputime.c sched/cputime: Fix ksoftirqd cputime accounting regression 2017-04-27 09:08:26 +02:00
deadline.c sched/deadline: Use deadline instead of period when calculating overflow 2017-03-16 09:37:38 +01:00
debug.c sched/headers: Prepare to move the task_lock()/unlock() APIs to <linux/sched/task.h> 2017-03-02 08:42:38 +01:00
fair.c Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial 2017-05-02 19:09:35 -07:00
features.h sched/core: Add WARNING for multiple update_rq_clock() calls 2017-03-16 09:46:21 +01:00
idle_task.c sched/core: Add wrappers for lockdep_(un)pin_lock() 2017-01-14 11:29:30 +01:00
idle.c sched/core: Call __schedule() from do_idle() without enabling preemption 2017-05-15 10:09:12 +02:00
loadavg.c sched/loadavg: Use {READ,WRITE}_ONCE() for sample window 2017-03-16 09:21:01 +01:00
Makefile sched/autogroup: Rename auto_group.[ch] to autogroup.[ch] 2017-02-08 09:01:11 +01:00
rt.c sched/rt: Add comments describing the RT IPI pull method 2017-03-16 09:41:35 +01:00
sched-pelt.h sched/fair: Move the PELT constants into a generated header 2017-04-14 10:26:37 +02:00
sched.h sched/core: Call __schedule() from do_idle() without enabling preemption 2017-05-15 10:09:12 +02:00
stats.c
stats.h sched/headers: Move cputime functionality from <linux/sched.h> and <linux/cputime.h> into <linux/sched/cputime.h> 2017-03-03 01:45:22 +01:00
stop_task.c sched/core: Add wrappers for lockdep_(un)pin_lock() 2017-01-14 11:29:30 +01:00
swait.c sched/headers: Prepare to move signal wakeup & sigpending methods from <linux/sched.h> into <linux/sched/signal.h> 2017-03-02 08:42:32 +01:00
topology.c sched/topology: Split out scheduler topology code from core.c into topology.c 2017-02-07 10:58:12 +01:00
wait.c sched/headers: fix up header file dependency on <linux/sched/signal.h> 2017-03-08 10:36:03 -08:00