The fallback path needs to enable interrupts, as is done for
the other page allocator calls. This was not necessary with
the alternate fast path since we handled irq enable/disable in
the slow path. The regular fast path handles irq enable/disable
around calls to the slow path, so we need to restore the proper
irq status before calling the page allocator from the slow path.
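As a rough sketch of the pattern (the function name and shape here are illustrative, not the actual SLUB code):

    /* Illustrative sketch only: re-enable interrupts around the page
     * allocator call in the fallback path, mirroring the other page
     * allocator call sites. */
    static struct page *fallback_alloc_sketch(gfp_t flags, int order)
    {
            struct page *page;

            if (flags & __GFP_WAIT)
                    local_irq_enable();     /* the page allocator may sleep */

            page = alloc_pages(flags, order);

            if (flags & __GFP_WAIT)
                    local_irq_disable();    /* restore what the slow path expects */

            return page;
    }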
Signed-off-by: Christoph Lameter <clameter@sgi.com>
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
ahci: Add Marvell 6121 SATA support
pata_ali: use atapi_cmd_type() to determine cmd type instead of transfer size
ahci: implement skip_host_reset parameter
ahci: request all PCI BARs
devres: implement pcim_iomap_regions_request_all()
libata-acpi: improve dock event handling
oops and fs corruption; the latter can happen even on a valid fs in case of OOM.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
pata_ali was using qc->nbytes to determine whether a command is a
data transfer command or not. Now that qc->nbytes can be extended by
padding and draining buffers, these tests are no longer useful.
Use atapi_cmd_type() instead.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Under certain circumstances (SSP turned off by the BIOS) and for
debugging purposes, skipping global controller reset is helpful. Add
a kernel parameter for it.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
ahci is often implemented with an accompanying SFF-compatible
interface, and a legacy IDE driver may attach to the legacy IO ports
when the controller is already claimed by ahci, and vice versa. This
patch makes ahci use pcim_iomap_regions_request_all() so that all IO
regions are claimed on attach.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Some drivers need to reserve all PCI BARs to prevent other drivers
from misusing unoccupied BARs. pcim_iomap_regions_request_all()
requests all BARs and iomaps the specified BARs.
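A rough usage sketch, with the BAR number and region name chosen for illustration:

    /* Sketch: request every BAR so nothing else can claim them, but
     * only iomap the BAR(s) selected by the mask. */
    static int probe_sketch(struct pci_dev *pdev)
    {
            int rc;

            rc = pcim_enable_device(pdev);
            if (rc)
                    return rc;

            rc = pcim_iomap_regions_request_all(pdev, 1 << 5, "ahci");
            if (rc)
                    return rc;

            /* pcim_iomap_table(pdev)[5] now holds the mapped address */
            return 0;
    }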
Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Improve ACPI hotplug handling such that dock events are handled properly.
* Register handlers for dock events.
* Directly detach device on EJECT_REQUEST instead of signaling hotplug
event. This prevents libata from accessing severed controller
and/or device.
* While at it, use named constants for ACPI events and move uevent
signaling inside host lock.
Original patch and testing by Holger Macht.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: Holger Macht <hmacht@suse.de>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
There is a race in virtio_net when disabling/enabling the callback.
I saw the following oops:
kernel BUG at /space/kvm/drivers/virtio/virtio_ring.c:218!
illegal operation: 0001 [#1] SMP
Modules linked in: sunrpc dm_mod
CPU: 2 Not tainted 2.6.25-rc1zlive-host-10623-gd358142-dirty #99
Process swapper (pid: 0, task: 000000000f85a610, ksp: 000000000f873c60)
Krnl PSW : 0404300180000000 00000000002b81a6 (vring_disable_cb+0x16/0x20)
R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:0 CC:3 PM:0 EA:3
Krnl GPRS: 0000000000000001 0000000000000001 0000000010005800 0000000000000001
000000000f3a0900 000000000f85a610 0000000000000000 0000000000000000
0000000000000000 000000000f870000 0000000000000000 0000000000001237
000000000f3a0920 000000000010ff74 00000000002846f6 000000000fa0bcd8
Krnl Code: 00000000002b819a: a7110001 tmll %r1,1
00000000002b819e: a7840004 brc 8,2b81a6
00000000002b81a2: a7f40001 brc 15,2b81a4
>00000000002b81a6: a51b0001 oill %r1,1
00000000002b81aa: 40102000 sth %r1,0(%r2)
00000000002b81ae: 07fe bcr 15,%r14
00000000002b81b0: eb7ff0380024 stmg %r7,%r15,56(%r15)
00000000002b81b6: a7f13e00 tmll %r15,15872
Call Trace:
([<000000000fa0bcd0>] 0xfa0bcd0)
[<00000000002b8350>] vring_interrupt+0x5c/0x6c
[<000000000010ab08>] do_extint+0xb8/0xf0
[<0000000000110716>] ext_no_vtime+0x16/0x1a
[<0000000000107e72>] cpu_idle+0x1c2/0x1e0
The problem can be triggered with a high amount of host->guest traffic.
I think it's the following race:
poll says netif_rx_complete
poll calls enable_cb
enable_cb opens the interrupt mask
a new packet comes, an interrupt is triggered----\
enable_cb sees that there is more work            |
enable_cb disables the interrupt                  |
        .                                         V
        .                             interrupt is delivered
        .                             skb_recv_done does atomic napi test, ok
some waiting disable_cb is called->check fails->bang!
        .
        .                             poll would do napi check
        .                             poll would do disable_cb
The fix is to let enable_cb not disable the interrupt again, but to
expect the caller to do the cleanup if it returns false. In that case,
the interrupt is only disabled if the NAPI test_set_bit was successful.
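A sketch of the resulting caller pattern in the poll routine, using the virtqueue ops and NAPI helpers of this era (abbreviated, not the verbatim diff):

    /* Out of packets? enable_cb() no longer disables the interrupt on
     * failure; the caller disables it only if it wins the NAPI
     * reschedule race. */
    if (received < budget) {
            netif_rx_complete(vi->dev, napi);
            if (unlikely(!vi->rvq->vq_ops->enable_cb(vi->rvq))
                && napi_schedule_prep(napi)) {
                    vi->rvq->vq_ops->disable_cb(vi->rvq);
                    __netif_rx_schedule(vi->dev, napi);
                    goto again;
            }
    }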
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (cleaned up doco)
Add a new poll_controller handler that the netpoll interface needs.
This enables netconsole logging from a kvm guest over the virtio
net interface.
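A sketch of the handler's shape, following virtio_net conventions of this era:

    #ifdef CONFIG_NET_POLL_CONTROLLER
    /* netpoll calls this when normal interrupt delivery is unavailable;
     * kicking NAPI makes queued packets get processed anyway. */
    static void virtnet_netpoll(struct net_device *dev)
    {
            struct virtnet_info *vi = netdev_priv(dev);

            napi_schedule(&vi->napi);
    }
    #endif

The handler is then hooked up through dev->poll_controller in the probe path.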
Signed-off-by: Amit Shah <amitshah@gmx.net>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If the host asks for a huge target, towards_target() can overflow, and
we end up oopsing as we try to release more pages than we have. The
simple fix is to use a 64-bit value.
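A sketch of the shape of the fix; read_target() is a hypothetical stand-in for the config-space read:

    /* Sketch: return a signed 64-bit delta so a huge host-requested
     * target cannot overflow the pages-to-release computation. */
    static inline s64 towards_target(struct virtio_balloon *vb)
    {
            u32 v = read_target(vb);        /* hypothetical config read */

            return (s64)v - vb->num_pages;
    }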
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fix up so that the virtio_blk devices in sysfs link correctly to their
block devices. This then allows them to be detected by hal, etc.
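A sketch of the likely shape of the fix, assuming the gendisk linkage of this era:

    /* Sketch: point the gendisk at its parent virtio device so the
     * sysfs block entry gains the proper device symlink. */
    vblk->disk->driverfs_dev = &vdev->dev;
    add_disk(vblk->disk);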
Signed-off-by: Jeremy Katz <katzj@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
virtio-pci acquires its spin lock in an interrupt context, so it's
necessary to use the spin_lock_irqsave/restore variants. This patch
fixes guest SMP when using virtio devices in KVM.
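A sketch of the locking change; vp_dev->lock stands in for whichever virtio-pci lock is shared with the interrupt handler:

    /* Sketch: the lock is also taken from the vring interrupt handler,
     * so process context must block interrupts while holding it. */
    unsigned long flags;

    spin_lock_irqsave(&vp_dev->lock, flags);
    /* ... walk/modify the virtqueue list ... */
    spin_unlock_irqrestore(&vp_dev->lock, flags);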
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If subbuf_pages was larger than the max number of pages the pipe
buffer will hold, subbuf_splice_actor() would happily go beyond
the array size.
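A sketch of the clamp, assuming the PIPE_BUFFERS limit of this era:

    /* Sketch: never queue more pages than the pipe can hold. */
    unsigned int nr_pages = min_t(unsigned int, subbuf_pages, PIPE_BUFFERS);

    for (total_len = 0; spd.nr_pages < nr_pages; spd.nr_pages++) {
            /* ... fill spd.pages[spd.nr_pages] ... */
    }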
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
no longer working for some time.
A driver that has been marked as BROKEN for such a long time seems
unlikely to be revived in the foreseeable future.
But if anyone ever wants to revive this driver, the code is still
present in older kernel releases.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Acked-by: Alan Cox <alan@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
This fixes a problem on 64-bit systems with 4GB of RAM with ATI RS690
chipsets. It makes sure the pcigart table is allocated in coherent
memory for DMA operations.
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's worth remembering that all new bright ideas on how to make this
command reader work properly and according to the docs will probably
fail :( Bring in some old code.
Also allow a larger SG-DMA download stride, and remove unnecessary
waits for command regulator pauses.
Signed-off-by: Dave Airlie <airlied@redhat.com>
The i915_vblank_swap() function schedules an automatic buffer swap
upon receipt of the vertical sync interrupt. Such an operation is
lengthy so it can't be allowed to happen in normal interrupt context,
thus the DRM implements this by scheduling the work in a kernel
softirq-scheduled tasklet. In order for the buffer swap to work
safely, the DRM's central lock must be taken, via a call to
drm_lock_take() located in drivers/char/drm/drm_irq.c within the
function drm_locked_tasklet_func(). The lock-taking logic uses a
non-interrupt-blocking spinlock to implement the manipulations needed
to take the lock. This semantic would be safe if all attempts to use
the spinlock only happen from process context. However this buffer
swap happens from softirq context which is really a form of interrupt
context. Thus we have an unsafe situation, in that
drm_locked_tasklet_func() can block on a spinlock already taken by a
thread in process context which will never get scheduled again because
of the blocked softirq tasklet. This wedges the kernel hard.
To trigger this bug, run a dual-head cloned mode configuration which
uses the i915 drm, then execute an opengl application which
synchronizes buffer swaps against the vertical sync interrupt. In my
testing, a lockup always results after running anywhere from 5 minutes
to an hour and a half. I believe dual-head is needed to really
trigger the problem because then the vertical sync interrupt handling
is no longer predictable (due to being interrupt-sourced from two
different heads running at different speeds). This raises the
probability of the tasklet trying to run while the userspace DRI is
doing things to the GPU (and manipulating the DRM lock).
The fix is to change the relevant spinlock semantics to be the
interrupt-blocking form. After this change I am no longer able to
trigger the lockup; the longest test run so far was 20 hours (test
stopped after that point).
Note: I have examined the places where this spinlock is being
employed; all are reasonably short bounded sequences and should be
suitable for interrupts being blocked without impacting overall kernel
interrupt response latency.
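A sketch of the spinlock semantic change described above (names illustrative):

    /* Sketch: take the lock's spinlock with interrupts blocked, so a
     * tasklet cannot spin forever on a process-context holder. */
    unsigned long irqflags;

    spin_lock_irqsave(&lock_data->spinlock, irqflags);
    /* ... manipulate DRM lock state ... */
    spin_unlock_irqrestore(&lock_data->spinlock, irqflags);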
Signed-off-by: Mike Isely <isely@pobox.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
- move boot_args[] into the init section
- move $global$ into the read_mostly section
- fix the following two section mismatches:
WARNING: vmlinux.o(.text+0x9c): Section mismatch: reference to .init.text:start_kernel (between '$pgt_fill_loop' and '$is_pa20')
WARNING: vmlinux.o(.text+0xa0): Section mismatch: reference to .init.text:start_kernel (between '$pgt_fill_loop' and '$is_pa20')
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Commit a0c1e9073e added code to futex.c
to detect whether futex_atomic_cmpxchg_inatomic was implemented at run
time:
+ curval = cmpxchg_futex_value_locked(NULL, 0, 0);
+ if (curval == -EFAULT)
+ futex_cmpxchg_enabled = 1;
This is bogus on parisc, since page zero in kernel virtual space is the
gateway page for syscall entry, and should not be read from the kernel.
(That, and we really don't like the kernel faulting on its own address
space...)
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
When we show_regs, we obviously have a struct pt_regs for the calling
frame. Use it in show_stack so we don't print the entire bogus call
trace leading up to the show_stack call.
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
This patch adds the known pa8900 CPUs to the inventory list and removes
the Crestone Peak one which apparently never escaped into the wild.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
This patch moves the default parisc defconfig to
arch/parisc/configs/generic_defconfig where it belongs and selects it as
the default defconfig through KBUILD_DEFCONFIG.
Signed-off-by: Adrian Bunk <adrian.bunk@movial.fi>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Commit 721fdf3416 introduced a subtle bug
by accidentally removing the "static" from iodc_dbuf. This resulted in
what appeared to be a trap without *current set to a task, probably the
result of a trap in real mode while calling firmware.
Also do other misc cleanups. Since the only input from firmware is
non-blocking, share iodc_dbuf between input and output, and spinlock
the only callers.
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
Originally, show_stack was used in BUG() output. However, a recent commit
changed it to print register state (no idea what that's supposed to help,
really...) and parisc was missing a backtrace because of it.
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
This essentially reverts commit 71fc47a9ad
("ACPI: basic initramfs DSDT override support"), because the code simply
isn't ready.
It did ugly things to the init sequence to populate the rootfs image
early, but that just ended up showing other problems with the whole
approach. The fact is, the VFS layer simply isn't initialized this
early, and the relevant ACPI code should either run much later, or this
shouldn't be done at all.
For 2.6.25, we'll just pick the latter option. We can revisit this
concept later if necessary.
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Tilman Schmidt <tilman@imap.cc>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Renninger <trenn@suse.de>
Cc: Eric Piel <eric.piel@tremplin-utc.net>
Cc: Len Brown <len.brown@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Markus Gaugusch <dsdt@gaugusch.at>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use the existing calc_delta_mine() calculation for sched_slice(). This
saves a divide and simplifies the code because we share it with the
other /cfs_rq->load users.
It also improves code size:
   text    data     bss     dec     hex filename
  42659    2740     144   45543    b1e7 sched.o.before
  42093    2740     144   44977    afb1 sched.o.after
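The change boils down to roughly the following shape (sketch, not the verbatim diff):

    /* Sketch: express the slice as
     *   period * se->load.weight / cfs_rq->load.weight
     * through the shared calc_delta_mine() helper instead of an
     * open-coded divide. */
    static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
    {
            return calc_delta_mine(__sched_period(cfs_rq->nr_running),
                                   se->load.weight, &cfs_rq->load);
    }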
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Fair sleepers need to scale their latency target down by runqueue
weight. Otherwise busy systems will gain an ever larger sleep bonus.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Currently we schedule to the leftmost task in the runqueue. When the
runtimes are very short because of some server/client ping-pong,
especially in over-saturated workloads, this will cycle through all
tasks, thrashing the cache.
Reduce cache thrashing by keeping dependent tasks together, running
newly woken tasks first. However, by not running the leftmost task
first we could starve tasks, because the wakee can gain unlimited
runtime. Therefore we only run the wakee if it's within a small
(wakeup_granularity) window of the leftmost task, as sketched below.
This preserves fairness, but does alternate server/client task groups.
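A sketch of the pick logic (helper names are illustrative, not the actual sched_fair.c code):

    /* Sketch only: prefer the freshly woken "wakee" over the leftmost
     * task when its vruntime lag is within the wakeup granularity. */
    static struct sched_entity *pick_next_sketch(struct cfs_rq *cfs_rq)
    {
            struct sched_entity *left = leftmost_entity(cfs_rq); /* hypothetical */
            struct sched_entity *wakee = cfs_rq->next;

            if (wakee && (s64)(wakee->vruntime - left->vruntime) <
                         sysctl_sched_wakeup_granularity)
                    return wakee;

            return left;
    }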
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Clear the cached inverse value when updating load. This is needed for
calc_delta_mine() to work correctly when using the rq load.
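The change is essentially the following (sketch of the load update helpers):

    /* Invalidate the cached inverse weight whenever the load changes,
     * so calc_delta_mine() recomputes it on next use. */
    static inline void update_load_add(struct load_weight *lw, unsigned long inc)
    {
            lw->weight += inc;
            lw->inv_weight = 0;
    }

    static inline void update_load_sub(struct load_weight *lw, unsigned long dec)
    {
            lw->weight -= dec;
            lw->inv_weight = 0;
    }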
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Current min_vruntime tracking is incorrect and will cause serious
problems when we don't run the leftmost task for some reason.
min_vruntime does two things: 1) it's used to determine a forward
direction when the u64 vruntime wraps, and 2) it's used to track the
leftmost vruntime to position newly enqueued tasks from.
The current logic advances min_vruntime whenever the current task's
vruntime advances. Because the current task may pass the leftmost task
still waiting, we're failing the second goal. This causes new tasks to
be placed too far ahead, which penalizes their runtime.
Fix this by making min_vruntime the min_vruntime of the waiting tasks,
tracking it in enqueue/dequeue, and comparing against current's
vruntime to obtain the absolute minimum when placing new tasks.
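A sketch of the tracking described above (helper names are illustrative):

    /* Sketch only: min_vruntime follows the waiting tasks, advanced
     * monotonically at enqueue/dequeue; new tasks are placed against
     * the true minimum of that and current's vruntime. */
    static u64 base_vruntime_sketch(struct cfs_rq *cfs_rq)
    {
            u64 min = cfs_rq->min_vruntime; /* tracked in enqueue/dequeue */

            if (cfs_rq->curr)
                    min = min_vruntime(min, cfs_rq->curr->vruntime);

            return min;     /* starting point for newly placed tasks */
    }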
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>