sched: Rename sched.c as sched/core.c in comments and Documentation
Most of the code in kernel/sched.c was moved to kernel/sched/core.c a long time back, but the comments and Documentation were never updated. I noticed this while going through sched-domains.txt and decided to fix it globally. I have not cross-checked whether everything these files reference in sched/core.c is still present and unchanged, as that was not the motive behind this patch.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/cdff76a265326ab8d71922a1db5be599f20aad45.1370329560.git.viresh.kumar@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 8404c90d05
commit 0a0fca9d83

@@ -373,7 +373,7 @@ can become very uneven.
 1.7 What is sched_load_balance ?
 --------------------------------
 
-The kernel scheduler (kernel/sched.c) automatically load balances
+The kernel scheduler (kernel/sched/core.c) automatically load balances
 tasks. If one CPU is underutilized, kernel code running on that
 CPU will look for tasks on other more overloaded CPUs and move those
 tasks to itself, within the constraints of such placement mechanisms

@@ -384,7 +384,7 @@ priority back.
 __rt_mutex_adjust_prio examines the result of rt_mutex_getprio, and if the
 result does not equal the task's current priority, then rt_mutex_setprio
 is called to adjust the priority of the task to the new priority.
-Note that rt_mutex_setprio is defined in kernel/sched.c to implement the
+Note that rt_mutex_setprio is defined in kernel/sched/core.c to implement the
 actual change in priority.
 
 It is interesting to note that __rt_mutex_adjust_prio can either increase

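The flow described in the rt-mutex hunk above is compact enough to sketch. Below is an illustrative user-space analog, not the kernel's rtmutex code: the struct layout and the single-boost simplification are assumptions, but the shape of __rt_mutex_adjust_prio (compute the effective priority via rt_mutex_getprio and only call rt_mutex_setprio when it differs from the current one) follows the documented behaviour.

	/* Illustrative analog of the documented priority-adjustment flow. */
	#include <stdio.h>

	struct task {
		int prio;        /* current effective priority (lower = higher) */
		int normal_prio; /* priority without any boosting */
		int boost_prio;  /* highest waiter priority, or -1 if none */
	};

	/* Stand-in for rt_mutex_getprio(): strongest of own and boosted priority. */
	static int rt_mutex_getprio(const struct task *t)
	{
		if (t->boost_prio >= 0 && t->boost_prio < t->normal_prio)
			return t->boost_prio;
		return t->normal_prio;
	}

	/* Stand-in for rt_mutex_setprio(), which really lives in kernel/sched/core.c. */
	static void rt_mutex_setprio(struct task *t, int prio)
	{
		t->prio = prio;
	}

	/* Only touch the "scheduler" when the computed priority actually differs. */
	static void __rt_mutex_adjust_prio(struct task *t)
	{
		int prio = rt_mutex_getprio(t);

		if (t->prio != prio)
			rt_mutex_setprio(t, prio);
	}

	int main(void)
	{
		struct task t = { .prio = 10, .normal_prio = 10, .boost_prio = 3 };

		__rt_mutex_adjust_prio(&t);
		printf("effective priority: %d\n", t.prio); /* boosted to 3 */
		return 0;
	}
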
@@ -25,7 +25,7 @@ is treated as one entity. The load of a group is defined as the sum of the
 load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
-In kernel/sched.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
 through scheduler_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run

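As a rough picture of the two-stage path mentioned above (a cheap check from the tick, with the heavy rebalancing deferred to a softirq), here is a small user-space analog. The function names mirror the kernel's, but everything else, including the 5-tick period, is invented for the example.

	/* User-space analog of tick -> softirq -> rebalance; illustrative only. */
	#include <stdbool.h>
	#include <stdio.h>

	static unsigned long jiffies;          /* stand-in for the kernel tick counter */
	static unsigned long next_balance = 5; /* next scheduled rebalancing event */
	static bool sched_softirq_pending;

	static void rebalance_domains(void)
	{
		printf("tick %lu: rebalancing sched domains\n", jiffies);
		next_balance = jiffies + 5;    /* re-arm the next event */
	}

	/* The "softirq" handler: this is where the actual balancing work happens. */
	static void run_rebalance_domains(void)
	{
		rebalance_domains();
	}

	/* Called from every tick: only raises the softirq when a balance is due. */
	static void trigger_load_balance(void)
	{
		if (jiffies >= next_balance)
			sched_softirq_pending = true;
	}

	static void scheduler_tick(void)
	{
		jiffies++;
		trigger_load_balance();
	}

	int main(void)
	{
		for (int i = 0; i < 12; i++) {
			scheduler_tick();
			if (sched_softirq_pending) {   /* softirq runs after the tick */
				sched_softirq_pending = false;
				run_rebalance_domains();
			}
		}
		return 0;
	}
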
@@ -62,7 +62,7 @@ struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
 the specifics and what to tune.
 
 Architectures may retain the regular override the default SD_*_INIT flags
-while using the generic domain builder in kernel/sched.c if they wish to
+while using the generic domain builder in kernel/sched/core.c if they wish to
 retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
 can be done by #define'ing ARCH_HASH_SCHED_TUNE.
 

@@ -137,7 +137,7 @@ don't block on each other (and thus there is no dead-lock wrt interrupts.
 But when you do the write-lock, you have to use the irq-safe version.
 
 For an example of being clever with rw-locks, see the "waitqueue_lock"
-handling in kernel/sched.c - nothing ever _changes_ a wait-queue from
+handling in kernel/sched/core.c - nothing ever _changes_ a wait-queue from
 within an interrupt, they only read the queue in order to know whom to
 wake up. So read-locks are safe (which is good: they are very common
 indeed), while write-locks need to protect themselves against interrupts.

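A minimal kernel-style sketch of the pattern this hunk describes, assuming the classic wait-queue example: interrupt context only reads the queue, so the plain read lock is enough, while the process-context writer must take the irq-safe write lock so the interrupt-context reader cannot deadlock against it. This is illustrative code, not taken from any of the patched files.

	#include <linux/spinlock.h>
	#include <linux/list.h>

	static DEFINE_RWLOCK(waitqueue_lock);
	static LIST_HEAD(waiters);

	/* Interrupt context: only reads the queue, so read_lock() is sufficient. */
	static void wake_waiters_from_irq(void)
	{
		read_lock(&waitqueue_lock);
		/* walk 'waiters' and decide whom to wake ... */
		read_unlock(&waitqueue_lock);
	}

	/* Process context: changes the queue, so interrupts must be blocked while
	 * the write lock is held, or the handler above could spin forever. */
	static void add_waiter(struct list_head *new)
	{
		unsigned long flags;

		write_lock_irqsave(&waitqueue_lock, flags);
		list_add(new, &waiters);
		write_unlock_irqrestore(&waitqueue_lock, flags);
	}
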
@@ -3127,7 +3127,7 @@
   at process_kern.c:156
 #3 0x1006a052 in switch_to (prev=0x50072000, next=0x507e8000, last=0x50072000)
   at process_kern.c:161
-#4 0x10001d12 in schedule () at sched.c:777
+#4 0x10001d12 in schedule () at core.c:777
 #5 0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71
 #6 0x1006aa10 in __down_failed () at semaphore.c:157
 #7 0x1006c5d8 in segv_handler (sc=0x5006e940) at trap_user.c:174

@@ -3191,7 +3191,7 @@
   at process_kern.c:161
 161 _switch_to(prev, next);
 (gdb)
-#4 0x10001d12 in schedule () at sched.c:777
+#4 0x10001d12 in schedule () at core.c:777
 777 switch_to(prev, next, prev);
 (gdb)
 #5 0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71

@@ -341,7 +341,7 @@ unsigned long get_wchan(struct task_struct *p)
  * is actually quite ugly. It might be possible to
  * determine the frame size automatically at build
  * time by doing this:
- * - compile sched.c
+ * - compile sched/core.c
  * - disassemble the resulting sched.o
  * - look for 'sub sp,??' shortly after '<schedule>:'
  */

@@ -17,7 +17,7 @@ static inline unsigned long cris_swapnwbrlz(unsigned long w)
 in another register:
 ! __asm__ ("swapnwbr %2\n\tlz %2,%0"
 ! : "=r,r" (res), "=r,X" (dummy) : "1,0" (w));
-confuses gcc (sched.c, gcc from cris-dist-1.14). */
+confuses gcc (core.c, gcc from cris-dist-1.14). */
 
 unsigned long res;
 __asm__ ("swapnwbr %0 \n\t"

@@ -1035,7 +1035,7 @@ END(ia64_delay_loop)
  * Return a CPU-local timestamp in nano-seconds. This timestamp is
  * NOT synchronized across CPUs its return value must never be
  * compared against the values returned on another CPU. The usage in
- * kernel/sched.c ensures that.
+ * kernel/sched/core.c ensures that.
  *
  * The return-value of sched_clock() is NOT supposed to wrap-around.
  * If it did, it would cause some scheduling hiccups (at the worst).

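The constraint spelled out in that comment (sched_clock() is CPU-local, so its values must never be compared across CPUs) is easy to honour by keeping both readings on one CPU. A hedged sketch of kernel-side usage, assuming the sched_clock() declaration from <linux/sched.h>:

	#include <linux/sched.h>    /* sched_clock() */
	#include <linux/preempt.h>
	#include <linux/types.h>

	/* Measure how long 'work' takes, in nanoseconds, on the local CPU only. */
	static u64 time_on_this_cpu(void (*work)(void))
	{
		u64 start, delta;

		preempt_disable();           /* keep both readings on the same CPU */
		start = sched_clock();
		work();
		delta = sched_clock() - start;
		preempt_enable();

		return delta;                /* never compare with another CPU's clock */
	}
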
@@ -27,12 +27,12 @@ unsigned long mt_fpemul_threshold;
  * FPU affinity with the user's requested processor affinity.
  * This code is 98% identical with the sys_sched_setaffinity()
  * and sys_sched_getaffinity() system calls, and should be
- * updated when kernel/sched.c changes.
+ * updated when kernel/sched/core.c changes.
  */
 
 /*
  * find_process_by_pid - find a process with a matching PID value.
- * used in sys_sched_set/getaffinity() in kernel/sched.c, so
+ * used in sys_sched_set/getaffinity() in kernel/sched/core.c, so
  * cloned here.
  */
 static inline struct task_struct *find_process_by_pid(pid_t pid)

@@ -476,8 +476,9 @@ einval: li v0, -ENOSYS
 /*
  * For FPU affinity scheduling on MIPS MT processors, we need to
  * intercept sys_sched_xxxaffinity() calls until we get a proper hook
- * in kernel/sched.c. Considered only temporary we only support these
- * hooks for the 32-bit kernel - there is no MIPS64 MT processor atm.
+ * in kernel/sched/core.c. Considered only temporary we only support
+ * these hooks for the 32-bit kernel - there is no MIPS64 MT processor
+ * atm.
  */
 sys mipsmt_sys_sched_setaffinity 3
 sys mipsmt_sys_sched_getaffinity 3

@@ -38,7 +38,7 @@ extern void drop_cop(unsigned long acop, struct mm_struct *mm);
 
 /*
  * switch_mm is the entry point called from the architecture independent
- * code in kernel/sched.c
+ * code in kernel/sched/core.c
  */
 static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 struct task_struct *tsk)

@@ -225,7 +225,7 @@ extern int do_work_pending(struct pt_regs *regs, u32 flags);
 
 /*
  * Return saved (kernel) PC of a blocked thread.
- * Only used in a printk() in kernel/sched.c, so don't work too hard.
+ * Only used in a printk() in kernel/sched/core.c, so don't work too hard.
  */
 #define thread_saved_pc(t) ((t)->thread.pc)
 

@@ -442,7 +442,7 @@ void _KBacktraceIterator_init_current(struct KBacktraceIterator *kbt, ulong pc,
   regs_to_pt_regs(&regs, pc, lr, sp, r52));
 }
 
-/* This is called only from kernel/sched.c, with esp == NULL */
+/* This is called only from kernel/sched/core.c, with esp == NULL */
 void show_stack(struct task_struct *task, unsigned long *esp)
 {
 struct KBacktraceIterator kbt;

@@ -39,7 +39,7 @@ void show_trace(struct task_struct *task, unsigned long * stack)
 static const int kstack_depth_to_print = 24;
 
 /* This recently started being used in arch-independent code too, as in
- * kernel/sched.c.*/
+ * kernel/sched/core.c.*/
 void show_stack(struct task_struct *task, unsigned long *esp)
 {
 unsigned long *stack;

@@ -5,7 +5,7 @@
  * (C) Copyright 2001 Linus Torvalds
  *
  * Atomic wait-for-completion handler data structures.
- * See kernel/sched.c for details.
+ * See kernel/sched/core.c for details.
  */
 
 #include <linux/wait.h>

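For readers who do not know the header being touched, a short usage sketch of the completion API it declares; the worker/waiter split and all names below are illustrative, not part of the patch.

	#include <linux/completion.h>
	#include <linux/kthread.h>

	static DECLARE_COMPLETION(setup_done);

	static int worker_fn(void *data)
	{
		/* ... perform the setup work ... */
		complete(&setup_done);            /* wake up anyone waiting below */
		return 0;
	}

	static void start_and_wait(void)
	{
		kthread_run(worker_fn, NULL, "example-worker");
		wait_for_completion(&setup_done); /* sleeps until complete() runs */
	}
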
@@ -803,7 +803,7 @@ static inline void perf_restore_debug_store(void) { }
 #define perf_output_put(handle, x) perf_output_copy((handle), &(x), sizeof(x))
 
 /*
- * This has to have a higher priority than migration_notifier in sched.c.
+ * This has to have a higher priority than migration_notifier in sched/core.c.
  */
 #define perf_cpu_notifier(fn) \
 do { \

@@ -67,7 +67,7 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
 
 #else /* DEBUG_SPINLOCK */
 #define arch_spin_is_locked(lock) ((void)(lock), 0)
-/* for sched.c and kernel_lock.c: */
+/* for sched/core.c and kernel_lock.c: */
 # define arch_spin_lock(lock) do { barrier(); (void)(lock); } while (0)
 # define arch_spin_lock_flags(lock, flags) do { barrier(); (void)(lock); } while (0)
 # define arch_spin_unlock(lock) do { barrier(); (void)(lock); } while (0)

@@ -361,7 +361,7 @@ __SYSCALL(__NR_syslog, sys_syslog)
 #define __NR_ptrace 117
 __SYSCALL(__NR_ptrace, sys_ptrace)
 
-/* kernel/sched.c */
+/* kernel/sched/core.c */
 #define __NR_sched_setparam 118
 __SYSCALL(__NR_sched_setparam, sys_sched_setparam)
 #define __NR_sched_setscheduler 119

@@ -540,7 +540,7 @@ static void update_domain_attr_tree(struct sched_domain_attr *dattr,
  * This function builds a partial partition of the systems CPUs
  * A 'partial partition' is a set of non-overlapping subsets whose
  * union is a subset of that set.
- * The output of this function needs to be passed to kernel/sched.c
+ * The output of this function needs to be passed to kernel/sched/core.c
  * partition_sched_domains() routine, which will rebuild the scheduler's
  * load balancing domains (sched domains) as specified by that partial
  * partition.

@@ -569,7 +569,7 @@ static void update_domain_attr_tree(struct sched_domain_attr *dattr,
  * is a subset of one of these domains, while there are as
  * many such domains as possible, each as small as possible.
  * doms - Conversion of 'csa' to an array of cpumasks, for passing to
- * the kernel/sched.c routine partition_sched_domains() in a
+ * the kernel/sched/core.c routine partition_sched_domains() in a
  * convenient format, that can be easily compared to the prior
  * value to determine what partition elements (sched domains)
  * were changed (added or removed.)

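The hand-off both cpuset.c hunks refer to looks roughly like this. It is a sketch with the locking and error handling stripped out; only generate_sched_domains() and partition_sched_domains() are the functions the comments actually name, the rest is illustrative.

	#include <linux/cpumask.h>
	#include <linux/sched.h>

	static void rebuild_sched_domains_sketch(void)
	{
		cpumask_var_t *doms;              /* one cpumask per sched domain */
		struct sched_domain_attr *attrs;  /* matching attributes, may be NULL */
		int ndoms;

		/* cpuset code computes the partial partition of the system's CPUs ... */
		ndoms = generate_sched_domains(&doms, &attrs);

		/* ... and hands it to the scheduler in kernel/sched/core.c, which
		 * rebuilds its load-balancing (sched) domains accordingly. */
		partition_sched_domains(ndoms, doms, attrs);
	}
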
@@ -11,7 +11,7 @@
  * Modification history kernel/time.c
  *
  * 1993-09-02 Philip Gladstone
- * Created file with time related functions from sched.c and adjtimex()
+ * Created file with time related functions from sched/core.c and adjtimex()
  * 1993-10-08 Torsten Duwe
  * adjtime interface update and CMOS clock write code
  * 1995-08-13 Torsten Duwe

@@ -64,7 +64,7 @@ static inline struct worker *current_wq_worker(void)
 
 /*
  * Scheduler hooks for concurrency managed workqueue. Only to be used from
- * sched.c and workqueue.c.
+ * sched/core.c and workqueue.c.
  */
 void wq_worker_waking_up(struct task_struct *task, int cpu);
 struct task_struct *wq_worker_sleeping(struct task_struct *task, int cpu);

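To make the contract of these two hooks concrete, here is a hedged sketch of how a scheduler-side caller is expected to use them around a kworker blocking and waking up. It is illustrative, not the actual code in kernel/sched/core.c; wake_up_process() stands in for whatever wake-up primitive the scheduler uses internally.

	#include <linux/sched.h>
	#include "workqueue_internal.h"

	/* Called just before a task blocks. */
	static void worker_about_to_sleep(struct task_struct *prev, int cpu)
	{
		struct task_struct *to_wakeup;

		if (prev->flags & PF_WQ_WORKER) {
			/* Tell the workqueue code its worker is going to sleep; it may
			 * hand back an idle worker to keep the pool from stalling. */
			to_wakeup = wq_worker_sleeping(prev, cpu);
			if (to_wakeup)
				wake_up_process(to_wakeup);
		}
	}

	/* Called when a task wakes up again. */
	static void worker_woken_up(struct task_struct *p, int cpu)
	{
		if (p->flags & PF_WQ_WORKER)
			wq_worker_waking_up(p, cpu); /* re-account the running worker */
	}
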