Merge the contents of hwsampler_files.c into
arch/s390/oprofile/init.c.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Fixes the following section mismatch:
Section mismatch in reference from the variable hws_cpu_notifier to the function .cpuinit.text:hws_cpu_callback()
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
This patch is a rework of the hwsampler oprofile implementation that
has been applied recently. Now there are fewer non-architectural
changes. The only changes are:
* introduction of oprofile_add_ext_hw_sample(), and
* removal of section attributes of oprofile_timer_init/_exit().
To set up hwsampler for oprofile we need to modify the start()/stop()
callbacks and add hwsampler control files in oprofilefs. We no
longer reinitialize the timer or hwsampler mode by calling
init/exit() again; instead, hwsampler_running is used to switch the
mode directly in oprofile_hwsampler_start/_stop(). For locking reasons
there is also hwsampler_file that reflects the value in oprofilefs.
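A minimal sketch of how the start/stop callbacks might switch modes based
on hwsampler_running; helper names such as hwsampler_start_all(),
hwsampler_stop_all(), oprofile_hw_interval and the timer_ops fallback are
taken as assumptions from the hwsampler patch set, not verbatim code:

  static int oprofile_hwsampler_start(void)
  {
          int retval;

          /* latch the oprofilefs value once, under lock, for this session */
          hwsampler_running = hwsampler_file;

          if (!hwsampler_running)
                  return timer_ops.start();

          retval = hwsampler_start_all(oprofile_hw_interval);
          if (retval)
                  return retval;
          return 0;
  }

  static void oprofile_hwsampler_stop(void)
  {
          if (!hwsampler_running) {
                  timer_ops.stop();
                  return;
          }
          hwsampler_stop_all();
  }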
The overall diffstat of the oprofile s390 hwsampler implementation
shows the low impact to non-architectural code:
arch/Kconfig | 3 +
arch/s390/Kconfig | 1 +
arch/s390/oprofile/Makefile | 2 +-
arch/s390/oprofile/hwsampler.c | 1256 ++++++++++++++++++++++++++++++++++
arch/s390/oprofile/hwsampler.h | 113 +++
arch/s390/oprofile/hwsampler_files.c | 162 +++++
arch/s390/oprofile/init.c | 6 +-
drivers/oprofile/cpu_buffer.c | 24 +-
drivers/oprofile/timer_int.c | 4 +-
include/linux/oprofile.h | 7 +
10 files changed, 1567 insertions(+), 11 deletions(-)
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
OProfile is enhanced to export all files for controlling System z's
hardware sampling, and to invoke hwsampler exported functions to
initialize and use System z's hardware sampling.
The patch invokes hwsampler_setup() during oprofile init and exports
the following hwsampler files under oprofilefs if hwsampler's setup
succeeded (a sketch of the file creation follows the list below):
A new directory for hardware sampling based files
/dev/oprofile/hwsampling/
The userland daemon must explicitly write to the following files
to disable (or enable) hardware based sampling
/dev/oprofile/hwsampling/hwsampler
to modify the actual sampling rate
/dev/oprofile/hwsampling/hw_interval
to modify the amount of sampling memory (measured in 4K pages)
/dev/oprofile/hwsampling/hw_sdbt_blocks
The following files are read only and show
the possible minimum sampling rate
/dev/oprofile/hwsampling/hw_min_interval
the possible maximum sampling rate
/dev/oprofile/hwsampling/hw_max_interval
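A minimal sketch of how these entries could be created with the oprofilefs
helpers of that kernel generation (oprofilefs_mkdir(), oprofilefs_create_file(),
oprofilefs_create_ulong(), oprofilefs_create_ro_ulong()); the file_operations
structs and backing variables shown here are illustrative, not necessarily
the names used in the patch:

  static int oprofile_create_hwsampling_files(struct super_block *sb,
                                              struct dentry *root)
  {
          struct dentry *dir;

          dir = oprofilefs_mkdir(sb, root, "hwsampling");
          if (!dir)
                  return -EINVAL;

          oprofilefs_create_file(sb, dir, "hwsampler", &hwsampler_fops);
          oprofilefs_create_file(sb, dir, "hw_interval", &hw_interval_fops);
          oprofilefs_create_ulong(sb, dir, "hw_sdbt_blocks", &oprofile_sdbt_blocks);
          oprofilefs_create_ro_ulong(sb, dir, "hw_min_interval", &oprofile_min_interval);
          oprofilefs_create_ro_ulong(sb, dir, "hw_max_interval", &oprofile_max_interval);
          return 0;
  }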
The patch splits the oprofile_timer_[init/exit] functions so that they
can also be called from user context (oprofilefs) to avoid a kernel
oops.
Applied with the following changes:
* whitespace changes in Makefile and timer_int.c
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Maran Pakkirisamy <maranp@linux.vnet.ibm.com>
Signed-off-by: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
This adds support for hardware based sampling on System z processors
(models z10 and up).
System z's hardware sampling is described in detail in:
SA23-2260-01 "The Load-Program-Parameter and CPU-Measurement Facilities"
The patch introduces
- support for System z's hardware sampler in OProfile's kernel module
- functions that control all hardware sampling related
operations, such as:
- checking whether the hardware sampling feature is available, i.e. on
System z models z10 and up, in LPAR mode only, and authorised
during LPAR activation
- allocating memory for the hardware sampling feature
- starting/stopping hardware sampling
All functions required to start and stop hardware sampling have to be
invoked by the oprofile kernel module as provided by the other patches
of this patch set.
In case hardware based sampling cannot be set up, OProfile falls back
to standard timer based sampling.
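A sketch of the resulting init-time decision; hwsampler_setup() and
oprofile_timer_init() come from this patch set, while
oprofile_hwsampler_init() stands in for whatever helper wires up the
hwsampler callbacks and is an assumption:

  int __init oprofile_arch_init(struct oprofile_operations *ops)
  {
          ops->backtrace = s390_backtrace;

          /* prefer hardware sampling, fall back to the timer interrupt */
          if (oprofile_hwsampler_init(ops) == 0)
                  return 0;
          return oprofile_timer_init(ops);
  }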
Applied with the following changes:
* enable compilation in Makefile
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Maran Pakkirisamy <maranp@linux.vnet.ibm.com>
Signed-off-by: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
* 'kvm-updates/2.6.38' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (142 commits)
KVM: Initialize fpu state in preemptible context
KVM: VMX: when entering real mode align segment base to 16 bytes
KVM: MMU: handle 'map_writable' in set_spte() function
KVM: MMU: audit: allow audit more guests at the same time
KVM: Fetch guest cr3 from hardware on demand
KVM: Replace reads of vcpu->arch.cr3 by an accessor
KVM: MMU: only write protect mappings at pagetable level
KVM: VMX: Correct asm constraint in vmcs_load()/vmcs_clear()
KVM: MMU: Initialize base_role for tdp mmus
KVM: VMX: Optimize atomic EFER load
KVM: VMX: Add definitions for more vm entry/exit control bits
KVM: SVM: copy instruction bytes from VMCB
KVM: SVM: implement enhanced INVLPG intercept
KVM: SVM: enhance mov DR intercept handler
KVM: SVM: enhance MOV CR intercept handler
KVM: SVM: add new SVM feature bit names
KVM: cleanup emulate_instruction
KVM: move complete_insn_gp() into x86.c
KVM: x86: fix CR8 handling
KVM guest: Fix kvm clock initialization when it's configured out
...
IA64 support forces us to abstract the allocation of the kvm structure.
But instead of mixing this up with arch-specific initialization and
doing the same on destruction, split both steps. This allows moving
generic destruction calls into generic code.
It also fixes the error clean-up on failures of kvm_create_vm() for IA64.
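A sketch of how the split could look in generic code; architectures that
need a special allocation (IA64) override the two hooks, everyone else
gets the trivial default, and the error path stays generic:

  /* default implementations, overridable per architecture */
  static inline struct kvm *kvm_arch_alloc_vm(void)
  {
          return kzalloc(sizeof(struct kvm), GFP_KERNEL);
  }

  static inline void kvm_arch_free_vm(struct kvm *kvm)
  {
          kfree(kvm);
  }

  static struct kvm *kvm_create_vm(void)
  {
          int r;
          struct kvm *kvm = kvm_arch_alloc_vm();

          if (!kvm)
                  return ERR_PTR(-ENOMEM);

          r = kvm_arch_init_vm(kvm);
          if (r)
                  goto out_err;
          /* ... generic setup ... */
          return kvm;

  out_err:
          kvm_arch_free_vm(kvm);      /* generic destruction on the error path */
          return ERR_PTR(r);
  }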
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Randomize ELF_ET_DYN_BASE, which is used when loading position
independent executables.
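A sketch of what the randomized base could look like, along the lines of
other architectures; brk_rnd() and the STACK_TOP fraction stand in for
whatever randomization helper and base the patch actually uses:

  /* randomize the load address of ET_DYN (PIE) binaries */
  static unsigned long randomize_et_dyn(unsigned long base)
  {
          unsigned long ret = PAGE_ALIGN(base + brk_rnd());

          if (!(current->flags & PF_RANDOMIZE))
                  return base;
          return (ret > base) ? ret : base;
  }

  #define ELF_ET_DYN_BASE  randomize_et_dyn(STACK_TOP / 3 * 2)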
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Randomize heap address like other architectures do already.
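A minimal sketch mirroring the x86 variant; the exact randomization range
chosen for s390 may differ:

  unsigned long arch_randomize_brk(struct mm_struct *mm)
  {
          unsigned long ret;

          /* pick a page aligned address up to 32MB above the current brk */
          ret = randomize_range(mm->brk, mm->brk + 0x02000000, 0);
          return ret ?: mm->brk;
  }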
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Helper function which tells us if a task is running in ESA mode.
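A sketch of such a helper, assuming the existing TIF_31BIT thread flag; the
name and exact form used by the patch may differ:

  #ifdef CONFIG_64BIT
  #define is_32bit_task()  (test_tsk_thread_flag(current, TIF_31BIT))
  #else
  #define is_32bit_task()  (1)     /* a 31 bit kernel always runs in ESA mode */
  #endif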
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Randomize the lower bits of the stack address like x86 and powerpc.
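A sketch of the corresponding arch_align_stack(), mirroring what x86 and
powerpc do; the randomization mask used on s390 may differ:

  unsigned long arch_align_stack(unsigned long sp)
  {
          if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
                  sp -= get_random_int() & ~PAGE_MASK;  /* randomize the low bits */
          return sp & ~0xfUL;
  }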
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Shuffle code around so it looks more like x86 and powerpc.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Historically 64 bit processes use the legacy address layout. However
there is no reason why 64 bit processes shouldn't benefit from the
flexible mmap layout advantages.
Therefore just enable it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The vdso object is currently always mapped with mm->mmap_base used as
requested address. In case of the flexible mmap layout this means it gets
mapped above mmap_base and therefore potentially steals a bit of
address space that is reserved for the stack.
With the flexible mmap layout the object should be mapped below
mmap_base; with the legacy mmap layout, above it.
To fix this just don't request any specific address and let the mmap
code figure out an address that fits.
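Sketch of the changed mapping request inside the vdso setup; vdso_pages is
the existing page count and the surrounding error handling labels belong to
the existing function, so treat the fragment as illustrative:

          /*
           * Request no particular address (0) and let get_unmapped_area()
           * pick one that matches the current mmap layout: below mmap_base
           * for the flexible layout, above it for the legacy layout.
           */
          vdso_base = get_unmapped_area(NULL, 0, vdso_pages << PAGE_SHIFT, 0, 0);
          if (IS_ERR_VALUE(vdso_base)) {
                  rc = vdso_base;
                  goto out_up;           /* existing error path in the function */
          }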
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reduce minimum gap between stack and mmap_base to 32MB. That way there
is a bit more space for heap and mmap for tight 31 bit address spaces.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Consider stack address randomization when calculating mmap_base for
the flexible mmap layout. Because of address randomization the stack
address can be up to 8MB lower than STACK_TOP.
When calculating mmap_base this isn't taken into account, which could
lead to the case that the gap between the real stack top and mmap_base
is lower than what ulimit specifies for the maximum stack size.
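A sketch of the adjusted calculation; MIN_GAP already reflects the 32MB
minimum from the previous patch, and stack_maxrandom_size() stands in for
whatever helper reports the maximum stack randomization (up to 8MB):

  #define MIN_GAP  (32 * 1024 * 1024)

  static unsigned long mmap_base(void)
  {
          unsigned long gap = rlimit(RLIMIT_STACK);

          if (gap < MIN_GAP)
                  gap = MIN_GAP;
          else if (gap > MAX_GAP)
                  gap = MAX_GAP;
          gap &= PAGE_MASK;

          /* keep the full gap below the randomized stack, not below STACK_TOP */
          return STACK_TOP - stack_maxrandom_size() - gap;
  }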
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (1436 commits)
cassini: Use local-mac-address prom property for Cassini MAC address
net: remove the duplicate #ifdef __KERNEL__
net: bridge: check the length of skb after nf_bridge_maybe_copy_header()
netconsole: clarify stopping message
netconsole: don't announce stopping if nothing happened
cnic: Fix the type field in SPQ messages
netfilter: fix export secctx error handling
netfilter: fix the race when initializing nf_ct_expect_hash_rnd
ipv4: IP defragmentation must be ECN aware
net: r6040: Return proper error for r6040_init_one
dcb: use after free in dcb_flushapp()
dcb: unlock on error in dcbnl_ieee_get()
net: ixp4xx_eth: Return proper error for eth_init_one
include/linux/if_ether.h: Add #define ETH_P_LINK_CTL for HPNA and wlan local tunnel
net: add POLLPRI to sock_def_readable()
af_unix: Avoid socket->sk NULL OOPS in stream connect security hooks.
net_sched: pfifo_head_drop problem
mac80211: remove stray extern
mac80211: implement off-channel TX using hw r-o-c offload
mac80211: implement hardware offload for remain-on-channel
...
When the seqfile /proc/cpuinfo gets accessed, loops_per_jiffy gets
recalculated for each possible cpu. However its value is only needed
on the first access.
In addition loops_per_jiffy should be recalculated when the machine
reports a capability change.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use get_online_cpus() instead of preempt_disable() to make sure cpus
don't go offline while accessing their per cpu data.
The preempt_disable() stuff is old code which was used before
get_online_cpus() was available.
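The resulting pattern looks roughly like this (the iteration and the per
cpu variable are illustrative):

          get_online_cpus();          /* was: preempt_disable() */
          for_each_online_cpu(cpu) {
                  /* cpu cannot go offline while the hotplug lock is held */
                  use_per_cpu_data(&per_cpu(some_percpu_var, cpu));
          }
          put_online_cpus();          /* was: preempt_enable() */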
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of messages that indicate if a cpu went online or offline.
There is nothing special about this anymore and these messages might
flood the kernel log buffer which makes debugging harder since more
important messages might be overwritten.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This enables the spinning mutex feature on s390 by removing
HAVE_DEFAULT_NO_SPIN_MUTEXES from arch/s390/Kconfig.
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The spinning mutex implementation uses cpu_relax() in busy loops as a
compiler barrier. Depending on the architecture, cpu_relax() may do more
than needed in these specific mutex spin loops. On System z we also give
up the time slice of the virtual cpu in cpu_relax(), which prevents
effective spinning on the mutex.
This patch replaces cpu_relax() in the spinning mutex code with
arch_mutex_cpu_relax(), which can be defined by each architecture that
selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so
this patch should not affect architectures other than System z for now.
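Sketch of the split: the generic code provides a cpu_relax() default unless
the architecture selects HAVE_ARCH_MUTEX_CPU_RELAX and supplies its own
definition; a plain compiler barrier is one plausible s390 choice:

  /* include/linux/mutex.h: generic fallback */
  #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
  #define arch_mutex_cpu_relax()  cpu_relax()
  #endif

  /* s390: spin without giving up the virtual cpu's time slice */
  #define arch_mutex_cpu_relax()  barrier()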
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290437256.7455.4.camel@thinkpad>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Call init_idle() which (re-)initializes the idle task structure before
it gets used on a new cpu.
That way we can also get rid of the odd preempt_enable_no_resched()
call we have in the cpu offline path within cpu_idle(). That call
prevented preempt count imbalances between cpu hotplug operations.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Delay idle task creation until a cpu gets set online instead of
creating them for all possible cpus at system startup.
For a one cpu system this should save more than 1 MB.
On my debug system with lots of debug stuff enabled this saves 2 MB.
Same as on x86.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Normal I/O operations through the DASD device driver give only access
to the data fields of an ECKD device even for track based I/O.
This patch extends the DASD device driver to give access to whole
ECKD tracks including count, key and data fields.
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If a DASD device has been reserved by a Linux system, and later
this reservation is ‘stolen’ by a second system by means of an
unconditional reserve, then the first system receives a
notification about this fact. With this patch such an event can
be either ignored, as before, or it can be used to let the device
fail all I/O requests, so that the device will not block anymore.
Signed-off-by: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Until now machine checks for the swapper process of the IPL cpu are just
implicitly (and more or less accidentally) enabled the first time the
idle process goes into idle state and loads an enabled wait psw.
Before that machine checks are disabled.
So let's enable them explicitly in trap_init() so we have a well defined
time when machine checks are enabled.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Make the code in the 31 bit entry.S as similar as possible to the
64 bit version in entry64.S. That makes it easier to add new code to
the first level interrupt handler that affects both 31 and 64 bit kernels.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support to accumulate the number of 64K-byte blocks that all paths
to a device at least support for a transport command.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
arch_needs_cpu() is always executed on the current cpu. Therefore
the cpu parameter can be ignored and it is possible to use __get_cpu_var()
instead of per_cpu() to access the per_cpu variable, which will
generate better code.
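Illustration of the change; the per cpu variable and field are the ones the
s390 idle code is assumed to use:

          /* before: go through the cpu number, although cpu == smp_processor_id() */
          return per_cpu(s390_idle, cpu).nohz_delay != 0;

          /* after: let the compiler use the per cpu base directly */
          return __get_cpu_var(s390_idle).nohz_delay != 0;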
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Introduce a scan threshold for the qdio outbound queues. By setting the
threshold the driver can tell qdio after how many used SBALs qdio
should schedule the outbound tasklet that scans the queue for finished
SBALs. The threshold is specified by the drivers because a
Hipersockets device is much faster in utilizing outbound buffers than a
ZFCP or OSA device.
The default values after how many used SBALs the tasklet should run are:
OSA: > 31 SBALs
Hipersockets: > 7 SBALs
zfcp: > 55 SBALs
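Conceptually the outbound handling then boils down to a check like the
following; the field and helper names are illustrative, not the actual
qdio structure layout:

          /* schedule the scan tasklet only once enough SBALs are in use */
          if (atomic_read(&q->nr_buf_used) >= q->u.out.scan_threshold)
                  tasklet_schedule(&q->tasklet);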
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Currently the buffer for diagnose data is allocated in the open function
of the debugfs file and is released in the close function. This has the
drawback that a user (root) can pin that memory by not closing the file.
This patch moves the buffer allocation to the read function. The buffer is
automatically released after the buffer is copied to userspace.
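A sketch of a read handler following that scheme; collect_diag_data() is an
illustrative stand-in for the actual diagnose call:

  static ssize_t dbfs_read(struct file *file, char __user *buf,
                           size_t size, loff_t *ppos)
  {
          void *data;
          size_t len;
          ssize_t rc;

          data = collect_diag_data(&len);   /* allocate and fill the buffer */
          if (!data)
                  return -ENOMEM;
          rc = simple_read_from_buffer(buf, size, ppos, data, len);
          kfree(data);                      /* nothing left for a user to pin */
          return rc;
  }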
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of register/unregister_early_external_interrupt() and clean up
the code while at it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use an early init call to initialize pfault. That way it is possible to
use register_external_interrupt() instead of the early variant.
No need to enable pfault any earlier since it only has an effect if
user space processes are running.
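A sketch of the early init call; the external interrupt code 0x2603 and the
handler name are taken from the existing pfault code and should be treated
as assumptions:

  static int __init pfault_irq_init(void)
  {
          if (!MACHINE_IS_VM)
                  return 0;
          /* the regular registration is available by early_initcall time */
          return register_external_interrupt(0x2603, pfault_interrupt);
  }
  early_initcall(pfault_irq_init);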
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for AP Bus I/O interrupt statistics in /proc/interrupts.
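The same pattern is used by the interrupt statistics patches below: the
driver's interrupt handler bumps a per cpu counter for its interrupt class,
which /proc/interrupts then sums up. A sketch, with IOINT_APB as an assumed
index for the AP bus class:

          /* in the AP bus interrupt handler */
          kstat_cpu(smp_processor_id()).irqs[IOINT_APB]++;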
Signed-off-by: Holger Dengler <hd@linux.vnet.ibm.com>
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for CTC I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for CLAW I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for LCS I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for VMUR I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for ccw based tape I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for 3270 I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for 3215 I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for DASD I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>