Use snprintf() here as well so we won't have to deal with this again.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This patch fixes bugzilla #12965:
https://bugzilla.kernel.org/show_bug.cgi?id=12965
The original code contains an improper use of the sprintf
function, where a buffer is used both as the input string
and as the output string. The code should remember the number
of bytes written by the previous call and use that as the offset
for later writes. Also replace sprintf with snprintf.
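A minimal sketch of the broken pattern and the fix (buffer and field
names are illustrative, not the actual driver code):

/* Broken: the output buffer is also passed as an input argument,
 * which is undefined behaviour for sprintf(). */
sprintf(buf, "%s model %u", buf, model);

/* Fixed: track the bytes written so far and append at that offset
 * with snprintf(), which also guards against buffer overflow. */
len += snprintf(buf + len, sizeof(buf) - len, " model %u", model);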
Signed-off-by: Chen Liu <chenliu@asset.uwaterloo.ca>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* 'for-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
percpu, x86: Add arch-specific this_cpu_cmpxchg_double() support
percpu: Generic support for this_cpu_cmpxchg_double()
alpha: use L1_CACHE_BYTES for cacheline size in the linker script
percpu: align percpu readmostly subsection to cacheline
Fix up trivial conflict in arch/x86/kernel/vmlinux.lds.S due to the
percpu alignment having changed ("x86: Reduce back the alignment of the
per-CPU data section")
Disable ftrace during kexec. Same as on x86/powerpc, see commit
ac4414e "powerpc/kdump: Disable ftrace during kexec".
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
task_show_regs used to be a debugging aid in the early bringup days
of Linux on s390. /proc/<pid>/status is a world readable file; it
is not a good idea to show the registers of a process. The only
correct fix is to remove task_show_regs.
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently the percpu readmostly subsection may share cachelines with
other percpu subsections, which may result in unnecessary cacheline
bouncing and performance degradation.
This patch adds a @cacheline parameter to the PERCPU() and PERCPU_VADDR()
linker macros, makes each arch linker script specify its cacheline
size, and uses it to align the percpu subsections.
This is based on Shaohua's x86 only patch.
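For illustration, an arch linker script would then invoke the macro
roughly like this (parameter order assumed from the description above;
the authoritative definition lives in include/asm-generic/vmlinux.lds.h):

/* Hypothetical vmlinux.lds.S excerpt: pass the arch cacheline size so
 * the percpu readmostly subsection starts on its own cacheline. */
PERCPU(L1_CACHE_BYTES, PAGE_SIZE)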
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Shaohua Li <shaohua.li@intel.com>
Randomize ELF_ET_DYN_BASE, which is used when loading position
independent executables.
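A hedged sketch of the idea (function name and window size are
illustrative, not necessarily the actual s390 code):

/* Add a random, page-aligned displacement on top of the default base
 * when address space randomization is enabled for the current task. */
static unsigned long randomize_et_dyn_base(unsigned long base)
{
        if (!(current->flags & PF_RANDOMIZE))
                return base;
        return PAGE_ALIGN(base + (get_random_int() % 0x02000000UL));
}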
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Randomize heap address like other architectures do already.
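A sketch modeled on the x86 arch_randomize_brk() (window size
illustrative):

unsigned long arch_randomize_brk(struct mm_struct *mm)
{
        unsigned long ret;

        /* Pick a random, page-aligned brk within 32 MB above the
         * current brk; fall back to the unrandomized value. */
        ret = randomize_range(mm->brk, mm->brk + 0x02000000, 0);
        return ret ?: mm->brk;
}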
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Randomize the lower bits of the stack address like x86 and powerpc.
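A sketch modeled on the x86 arch_align_stack():

unsigned long arch_align_stack(unsigned long sp)
{
        /* Subtract up to 8 kB from the stack start, then keep the
         * usual 16-byte alignment. */
        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
                sp -= get_random_int() % 8192;
        return sp & ~0xf;
}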
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The vdso object is currently always mapped with mm->mmap_base used as
the requested address. In case of the flexible mmap layout this means it
gets mapped above mmap_base and therefore potentially steals a bit of
address space that is reserved for the stack.
With the flexible mmap layout the object should be mapped below
mmap_base; with the legacy mmap layout, above it.
To fix this just don't request any specific address and let the mmap
code figure out an address that fits.
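A hedged sketch of the fix in arch_setup_additional_pages() (variable
and label names assumed, mmap_sem held by the caller):

/* Request no specific address (addr = 0); get_unmapped_area() then
 * honours the current mmap layout: below mmap_base for the flexible
 * layout, above it for the legacy one. */
vdso_base = get_unmapped_area(NULL, 0, vdso_pages << PAGE_SHIFT, 0, 0);
if (IS_ERR_VALUE(vdso_base)) {
        rc = vdso_base;
        goto out_up;
}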
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When the seqfile /proc/cpuinfo gets accessed, loops_per_jiffy gets
recalculated for each possible cpu. However its value is only needed
on the first access.
In addition loops_per_jiffy should be recalculated when the machine
reports a capability change.
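A hedged sketch of the show function after the change (the helper name
s390_adjust_jiffies is assumed):

static int show_cpuinfo(struct seq_file *m, void *v)
{
        unsigned long n = (unsigned long) v - 1;

        if (!n) {
                /* First access: recalculate loops_per_jiffy once. */
                s390_adjust_jiffies();
                seq_printf(m, "bogomips per cpu: %lu.%02lu\n",
                           loops_per_jiffy / (500000 / HZ),
                           (loops_per_jiffy / (5000 / HZ)) % 100);
        }
        /* ... per-cpu lines follow ... */
        return 0;
}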
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use get_online_cpus() instead of preempt_disable() to make sure cpus
don't go offline while accessing their per cpu data.
The preempt_disable() stuff is old code which was used before
get_online_cpus() was available.
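The pattern, sketched (the per cpu variable is illustrative):

int cpu;
unsigned long sum = 0;

get_online_cpus();      /* cpus cannot go offline in between */
for_each_online_cpu(cpu)
        sum += per_cpu(irq_count, cpu); /* illustrative per cpu variable */
put_online_cpus();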
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of messages that indicate if a cpu went online or offline.
There is nothing special about this anymore and these messages might
flood the kernel log buffer which makes debugging harder since more
important messages might be overwritten.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Call init_idle() which (re-)initializes the idle task structure before
it gets used on a new cpu.
That way we can also get rid of the odd preempt_enable_no_resched()
call we have in the cpu offline path within cpu_idle(). That call
prevented preempt count imbalances between cpu hotplug operations.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Delay idle task creation until a cpu gets set online instead of
creating them for all possible cpus at system startup.
For a one cpu system this should save more than 1 MB.
On my debug system with lots of debug stuff enabled this saves 2 MB.
Same as on x86.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Until now machine checks for the swapper process of the IPL cpu are just
implicitly (and more or less accidentally) enabled the first time the
idle process goes into idle state and loads an enabled wait psw.
Before that machine checks are disabled.
So let's enable them explicitly in trap_init() so we have a well defined
time when machine checks are enabled.
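A hedged sketch (local_mcck_enable() is the s390 helper that sets the
machine check mask bit in the current PSW):

void __init trap_init(void)
{
        /* ... set up the program check table ... */

        /* Machine checks are now enabled at a well defined point
         * instead of implicitly by the first enabled wait psw. */
        local_mcck_enable();
}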
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Make the code in the 31 bit entry.S as similar as possible to the
64 bit version in entry64.S. That makes it easier to add new code to
the first level interrupt handler that affects both 31 and 64 bit kernels.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of register/unregister_early_external_interrupt() and clean up
the code while at it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use an early init call to initialize pfault. That way it is possible to
use register_external_interrupt() instead of the early variant.
There is no need to enable pfault any earlier since it only has an
effect if user space processes are running.
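A hedged sketch of the initcall (0x2603 is the pfault external
interrupt code; surrounding details simplified):

static int __init pfault_irq_init(void)
{
        if (!MACHINE_IS_VM)
                return 0;
        if (register_external_interrupt(0x2603, pfault_interrupt))
                panic("Couldn't register pfault interrupt");
        /* ... enable pfault handshaking with the hypervisor ... */
        return 0;
}
early_initcall(pfault_irq_init);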
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for AP Bus I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Holger Dengler <hd@linux.vnet.ibm.com>
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for CTC I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for CLAW I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for LCS I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for VMUR I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for ccw based tape I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for 3270 I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for 3215 I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for DASD I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Count traditional qdio interrupts and adapter interrupts for qdio
in the interrupt statistics.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Up to now /proc/interrupts only has statistics for external and i/o
interrupts but doesn't split them up any further.
This patch adds a line for every single interrupt source so that it
is easier to tell what the machine is/was doing.
Part of the output now looks like this:
           CPU0       CPU2       CPU4
EXT:       3898       4232       2305
I/O:        782        315        245
CLK:       1029       1964        727   [EXT] Clock Comparator
IPI:       2868       2267       1577   [EXT] Signal Processor
TMR:          0          0          0   [EXT] CPU Timer
TAL:          0          0          0   [EXT] Timing Alert
PFL:          0          0          0   [EXT] Pseudo Page Fault
[...]
NMI:          0          1          1   [NMI] Machine Checks
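Drivers feed these counters with a one-line increment; a hedged
example (the enum entry name follows the s390 irq statistics code):

/* Account a DASD I/O interrupt for the current cpu. */
kstat_cpu(smp_processor_id()).irqs[IOINT_DAS]++;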
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add kprobes annotations to get the massive 'probe kernel.function("*") {}'
stress test working.
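The annotations follow the usual kprobes pattern; a hedged
illustration:

/* __kprobes places the function in .kprobes.text so it cannot be
 * probed itself, which avoids breakpoint recursion when every kernel
 * function gets probed by the stress test. */
static int __kprobes kprobe_handler(struct pt_regs *regs)
{
        /* ... breakpoint handling ... */
        return 1;
}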
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Restructure the kprobe breakpoint handler function. Add comments to
make it more comprehensible and add a sanity check for re-entering
kprobes.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Register %r14 and %r15 are already stored in jprobe_saved_regs, no need
to store them a second time in jprobe_saved_r14 / jprobe_saved_r15.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The s390 architecture can execute code on kmalloc/vmalloc memory.
No need for the __ARCH_WANT_KPROBES_INSN_SLOT detour.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Replace set_current_kprobe/reset_current_kprobe/save_previous_kprobe/
restore_previous_kprobe with a simpler scheme push_kprobe/pop_kprobe.
The mini kprobes stack can store up to two active kprobes.
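A hedged sketch of the scheme (field names follow the common arch
kprobe_ctlblk layout):

static void __kprobes push_kprobe(struct kprobe_ctlblk *kcb, struct kprobe *p)
{
        kcb->prev_kprobe.kp = __get_cpu_var(current_kprobe);
        kcb->prev_kprobe.status = kcb->kprobe_status;
        __get_cpu_var(current_kprobe) = p;
}

static void __kprobes pop_kprobe(struct kprobe_ctlblk *kcb)
{
        __get_cpu_var(current_kprobe) = kcb->prev_kprobe.kp;
        kcb->kprobe_status = kcb->prev_kprobe.status;
        kcb->prev_kprobe.kp = NULL;
}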
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Determine instruction fixup details in resume_execution, no need to do
it beforehand. Remove fixup, ilen and reg from arch_specific_insn.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Move the definition of the helper structure ins_replace_args to the
only place where it is used and drop the old member as it is not needed.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The saved interrupt mask and the saved control registers are only
relevant while single stepping is set up. A secondary kprobe while
kprobe single stepping is active may not occur. That makes it safe
to remove the save and restore of kprobe_saved_imask / kprobe_saved_ctl
from save_previous_kprobe and restore_previous_kprobe.
Move all single step related code to two functions, enable_singlestep
and disable_singlestep.
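A hedged sketch of the pair (PER programming condensed to a comment;
control register helpers as on s390):

static void enable_singlestep(struct kprobe_ctlblk *kcb,
                              struct pt_regs *regs, unsigned long ip)
{
        /* Save %cr9-%cr11 and the interrupt mask bits of the psw. */
        __ctl_store(kcb->kprobe_saved_ctl, 9, 11);
        kcb->kprobe_saved_imask = regs->psw.mask &
                (PSW_MASK_PER | PSW_MASK_IO | PSW_MASK_EXT);
        /* ... program the PER control registers for an instruction
         * fetch event on ip, set PSW_MASK_PER, mask interrupts ... */
}

static void disable_singlestep(struct kprobe_ctlblk *kcb,
                               struct pt_regs *regs, unsigned long ip)
{
        /* Restore %cr9-%cr11 and the saved interrupt mask, continue
         * execution at ip. */
        __ctl_load(kcb->kprobe_saved_ctl, 9, 11);
        regs->psw.mask &= ~(PSW_MASK_PER | PSW_MASK_IO | PSW_MASK_EXT);
        regs->psw.mask |= kcb->kprobe_saved_imask;
        regs->psw.addr = ip | PSW_ADDR_AMODE;
}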
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove special case of a kprobe on a breakpoint while a relocated
instruction is single stepped. The only instruction that may cause
a fault while kprobe single stepping is active is the relocated
instruction. There is no kprobe on the instruction slot retrieved
with get_insn_slot().
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This fixes the same problem as described in the patch "nohz: fix
printk_needs_cpu() return value on offline cpus" for the arch_needs_cpu()
primitive:
arch_needs_cpu() may return 1 if called on offline cpus. When a cpu gets
offlined it schedules the idle process which, before killing its own cpu,
will call tick_nohz_stop_sched_tick().
That function in turn will call arch_needs_cpu() in order to check if the
local tick can be disabled. On offline cpus this function should naturally
return 0 since, regardless of whether the tick gets disabled or not, the cpu
will be dead shortly after. That is besides the fact that __cpu_disable()
should already have made sure that no interrupts on the offlined cpu will be
delivered anyway.
In this case it prevents tick_nohz_stop_sched_tick() from calling
select_nohz_load_balancer(). No idea if that really is a problem. However what
made me debug this is that on 2.6.32 the function get_nohz_load_balancer() is
used within __mod_timer() to select a cpu on which a timer gets enqueued.
If arch_needs_cpu() returns 1 then the nohz_load_balancer cpu doesn't get
updated when a cpu gets offlined. It may contain the cpu number of an offline
cpu. In turn timers get enqueued on an offline cpu and not very surprisingly
they never expire and cause system hangs.
This has been observed on 2.6.32 kernels. On current kernels __mod_timer()
uses get_nohz_timer_target() which doesn't have that problem. However there
might be other problems because of the too early exit of
tick_nohz_stop_sched_tick() in case a cpu goes offline.
This specific bug was introduced with 3c5d92a0 "nohz: Introduce
arch_needs_cpu".
In this case a cpu hotplug notifier is used to fix the issue in order to keep
the normal/fast path small. All we need to do is to clear the condition that
makes arch_needs_cpu() return 1 since it is just a performance improvement
which is supposed to keep the local tick running for a short period if a cpu
goes idle. Nothing special needs to be done except for clearing the condition.
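A hedged sketch of such a notifier (per-cpu structure and field names
assumed from the s390 idle code):

static int s390_nohz_notify(struct notifier_block *self,
                            unsigned long action, void *hcpu)
{
        struct s390_idle_data *idle = &per_cpu(s390_idle, (long) hcpu);

        switch (action) {
        case CPU_DYING:
        case CPU_DYING_FROZEN:
                /* Clear the flag arch_needs_cpu() tests so the dying
                 * cpu no longer claims to need its tick. */
                idle->nohz_delay = 0;
        default:
                break;
        }
        return NOTIFY_OK;
}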
Cc: stable@kernel.org
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
On each machine check all registers are revalidated. The save area for
the clock comparator however only contains the uppermost seven bytes
of the former contents, if valid.
Therefore the machine check handler uses a store clock instruction to
get the current time and writes that to the clock comparator register
which in turn will generate an immediate timer interrupt.
However, the expected time of the next timer interrupt is stored
within the lowcore. If the interrupt happens before that time the
handler won't be called. In turn the clock comparator won't be
reprogrammed, and therefore the interrupt condition stays pending,
which causes an interrupt loop until the expected time is reached.
On NOHZ machines this can result in unresponsive machines since the
time of the next expected interrupt can be a couple of days in the
future.
To fix this just revalidate the clock comparator register with the
expected value.
In addition the special handling for udelay must be changed as well.
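The revalidation then boils down to (helper and lowcore field as on
s390):

/* Reprogram the clock comparator with the expected expiry value kept
 * in the lowcore instead of the current TOD time. */
set_clock_comparator(S390_lowcore.clock_comparator);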
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The big kernel lock has been removed from all these files at some point,
leaving only the #include.
Remove this too as a cleanup.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Analogous to git commit 737480a0d5,
fix the return address of subsequent kretprobes when multiple
kretprobes are set on the same function.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Execute the kprobe exception and fault handler with interrupts disabled.
Disabling the interrupts only while a single step is in progress is not
good enough; a kprobe hit from interrupt context while another kprobe is
being handled can confuse the internal housekeeping.
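A hedged illustration of the rule (not the literal diff):

unsigned long flags;

/* Interrupts stay off for the whole exception/fault handler, not just
 * while the single step is armed. */
local_irq_save(flags);
rc = kprobe_handler(regs);
local_irq_restore(flags);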
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>