For a 64-bit BAR[i], only pci_dev->resource[i] is valid; the ->resource[i+1]
slot is unused and contains zeroes in all fields.
So when we update a PCI BAR, all we need to do is check that we're
going to update a _valid_ resource.
Also make sure to write high bits - use "x >> 16 >> 16" (rather than the
simpler ">> 32") to avoid warnings on 32-bit architectures where we're
not going to have any high bits.
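Schematically, the BAR update then looks something like this (a sketch only;
the register offset handling and the flag check are simplified, and
IORESOURCE_MEM_64 is used here merely to illustrate the 64-bit case):

  /* Sketch: update one BAR, skipping the unused slot of a 64-bit pair
   * and splitting the 64-bit write into two 32-bit config writes. */
  static void update_bar(struct pci_dev *dev, int i)
  {
          struct resource *res = &dev->resource[i];
          int reg = PCI_BASE_ADDRESS_0 + 4 * i;   /* illustrative offset */

          if (!res->flags)        /* slot i+1 of a 64-bit BAR: nothing to do */
                  return;

          pci_write_config_dword(dev, reg, (u32)res->start);
          if (res->flags & IORESOURCE_MEM_64)
                  /* ">> 16 >> 16" keeps 32-bit builds free of warnings */
                  pci_write_config_dword(dev, reg + 4,
                                         (u32)(res->start >> 16 >> 16));
  }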
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch removes the unused bt_dump() function and it also removes
its BT_DMP macro. It also unexports the hci_dev_get(), hci_send_cmd()
and hci_si_event() functions.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
There's no need to check for NULL before calling kfree() on a pointer.
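For illustration (the pointer name is just an example), this is the kind of
change the cleanup makes, since kfree(NULL) is a no-op:

  /* Before: redundant NULL check */
  if (buf)
          kfree(buf);

  /* After: kfree() handles NULL itself */
  kfree(buf);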
Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
The Kensington Bluetooth USB adapter is based on a Broadcom chip
with HID proxy support. To initialize these kinds of devices
correctly, it is necessary to send HCI_Reset as the first command.
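The usual way to handle this in the USB Bluetooth driver is a blacklist entry
that flags the device for an initial reset; a sketch, where the vendor/product
IDs and the flag name are placeholders rather than the real values:

  /* Sketch: mark the adapter so that HCI_Reset is sent before any
   * other command.  IDs and flag are illustrative only. */
  static struct usb_device_id blacklist_ids[] = {
          /* Kensington Bluetooth USB adapter (Broadcom, HID proxy) */
          { USB_DEVICE(0x047d, 0x105e), .driver_info = HCI_RESET },
          { }     /* terminating entry */
  };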
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
My patch in commit fa72b903f7 incorrectly
removed blk_queue_tag->real_max_depth.
The original resize implementation was incorrect in the following
points.
* the actual allocation size of tag_index was shorter than real_max_depth,
  but was assumed to be of the same size, possibly causing memory access
  beyond the allocated area.
* bits in tag_map between max_depth and real_max_depth were
  initialized to 1's, making those tags permanently reserved.
In an attempt to fix the above two bugs, I removed the allocation
optimization in init_tag_map and removed real_max_depth. The tag map/index
were allocated and freed immediately during resize.
Unfortunately, I wasn't considering that the tag map/index can be resized
dynamically with tags beyond new_depth still active. This led to accesses
to a freed area after shrinking the tags, and to the following bug report
thread on linux-scsi.
http://marc.theaimsgroup.com/?l=linux-scsi&m=112319898111885&w=2
To fix the problem, I've revived real_max_depth without allocation
optimization in init_tag_map, and Andrew Vasquez confirmed that the
problem was fixed. As Jens is not going to be available for a week, he
asked me to make sure that this patch reaches you.
http://marc.theaimsgroup.com/?l=linux-scsi&m=112325778530886&w=2
Also, a comment was added to make clear that real_max_depth is needed for
dynamic shrinking.
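For reference, a rough sketch of the fields involved (the comments are mine,
not from the patch):

  struct blk_queue_tag {
          struct request **tag_index;     /* array of active requests,
                                             sized to real_max_depth */
          unsigned long *tag_map;         /* free/used tag bitmap */
          int max_depth;                  /* depth visible to new requests */
          int real_max_depth;             /* actual allocated size; keeps tags
                                             beyond max_depth valid while the
                                             map is shrunk dynamically */
          /* other fields omitted */
  };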
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch includes support for the new Infineon Trusted Platform Module
SLB 9635 TT 1.2 and also includes ACPI support for both chip
versions (SLD 9630 TT 1.1 and SLB 9635 TT 1.2). Since the ioports and
configuration registers are not correctly set on some machines, the
configuration is now done via PNPACPI, which reads the correct values
out of the DSDT table. Note that you have to have CONFIG_PNP,
CONFIG_ACPI_BUS and CONFIG_PNPACPI enabled to run this driver (assuming
that mainboards with a TPM need ACPI anyway).
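A minimal sketch of the PNPACPI hookup; the PNP IDs and the probe-function
name below are placeholders for illustration, not the exact driver code:

  /* Sketch: let PNPACPI supply the ioports/IRQ from the DSDT instead of
   * relying on hard-coded configuration registers. */
  static int tpm_inf_pnp_probe(struct pnp_dev *dev,
                               const struct pnp_device_id *id);

  static struct pnp_device_id tpm_pnp_tbl[] = {
          { "IFX0101", 0 },       /* SLD 9630 TT 1.1 (placeholder ID) */
          { "IFX0102", 0 },       /* SLB 9635 TT 1.2 (placeholder ID) */
          { "", 0 }
  };

  static struct pnp_driver tpm_inf_pnp_driver = {
          .name     = "tpm_inf_pnp",
          .id_table = tpm_pnp_tbl,
          .probe    = tpm_inf_pnp_probe,  /* uses pnp_port_start() etc. */
  };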
Signed-off-by: Marcel Selhorst <selhorst@crypto.rub.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Since the beginning of July my Opteron box was randomly crashing and
being rebooted by hardware watchdog. Today it finally did it in front
of me, and this patch will hopefully fix it.
The problem is that at the end of June (the 28th, to be exact: commit
47f176fdaf, "[PATCH] Using msleep()
instead of HZ") rtc_get_rtc_time was converted to use msleep() instead
of busy waiting. But rtc_get_rtc_time is used by hpet_rtc_interrupt,
and scheduling is not allowed in interrupt context. So I'm reverting this
part of the original change, replacing msleep() back with a busy loop.
The original code was busy waiting for up to 20ms, but on my hardware, in
the worst case, the update-in-progress bit was asserted for at most 363
passes through the loop (on a 2GHz dual Opteron), far less than even one
jiffy, let alone 20ms. So I changed the code to wait only as long as
necessary. Otherwise, with the RTC set to generate an 8192Hz timer, it
would stop doing anything for 20ms from time to time (160 pulses
skipped!), which is rather suboptimal as far as I can tell.
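The reverted wait is, schematically, a bounded busy loop on the
update-in-progress bit (a sketch; the loop bound is illustrative):

  /* Sketch: safe to call from hpet_rtc_interrupt -- no sleeping. */
  static void wait_for_rtc_update(void)
  {
          unsigned int i;

          for (i = 0; i < 1000000; i++)   /* bound is illustrative */
                  if (!(CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP))
                          break;
  }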
Signed-off-by: Petr Vandrovec <vandrove@vc.cvut.cz>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When recently addressing remarks by Alexey Dobriyan about
the isp116x-hcd, I introduced a bug in the driver. Please
apply the attached patch to fix it.
Signed-off-by: Olav Kongas <ok@artecdesign.ee>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch has a one line oops fix, plus related cleanups.
- The bugfix uses microframe scheduling data given to the hardware to
test "is this a periodic QH", rather than testing for nonzero period.
(Prevents an oops by providing the correct answer.)
- The cleanup going along with the patch should make it clearer what's
going on whenever those bitfields are accessed.
The bug came about when, around January, two new kinds of EHCI interrupt
scheduling operation were added, involving both the high speed (24 KBytes
per millisec) and low/full speed (1-64 bytes per millisec) microframe
scheduling. A driver for the Edirol UA-1000 Audio Capture Unit ran into
the oops; it used one of the newly supported high speed modes.
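The bugfix's check, in essence, moves from the driver's period bookkeeping to
the scheduling mask programmed into the hardware descriptor; a sketch, with
field and mask names as in the older EHCI code and therefore to be treated as
assumptions:

  /* Sketch: a QH is periodic if an interrupt schedule mask (SMASK) was
   * written into hw_info2, even when qh->period happens to be zero. */
  static inline int qh_is_periodic(struct ehci_qh *qh)
  {
          return (qh->hw_info2 & cpu_to_le32(QH_SMASK)) != 0;
  }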
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
In yenta_socket, we default to using the resource setting of the CardBus
bridge. However, this is a PCI-bus-centric view of resources and thus needs
to be converted to generic resources first. Therefore, add a
pcibios_bus_to_resource() call in between. This function is a mere wrapper on
x86 and friends; on some other architectures it already exists, is added in
this patch (alpha, arm, ppc, ppc64), or still needs to be provided (parisc --
where is its pcibios_resource_to_bus()?).
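The conversion amounts to something like this (a sketch; config_readl, the
register pair, and the pcibios_bus_to_resource() prototype are as used in
yenta at the time and should be treated as assumptions):

  /* Sketch: the bridge window read from config space is a PCI bus
   * address; convert it before treating it as a generic resource. */
  static void sketch_read_window(struct yenta_socket *socket,
                                 struct resource *res)
  {
          struct pci_bus_region region;

          region.start = config_readl(socket, PCI_CB_MEMORY_BASE_0);
          region.end   = config_readl(socket, PCI_CB_MEMORY_LIMIT_0);
          pcibios_bus_to_resource(socket->dev, res, &region);
  }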
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Some PCI devices (e.g. 3c905B, 3c556B) lose all configuration
(including BARs) when transitioning from D3hot->D0. This leaves such
a device in an inaccessible state. The patch below causes the BARs
to be restored when enabling such a device, so that its driver will
be able to access it.
The patch also adds pci_restore_bars as a new global symbol, and adds a
corresponding EXPORT_SYMBOL_GPL for it.
Some firmware (e.g. Thinkpad T21) leaves devices in D3hot after a
(re)boot. Most drivers call pci_enable_device very early, so devices
left in D3hot that lose configuration during the D3hot->D0 transition
will be inaccessible to their drivers.
Drivers could be modified to account for this, but it would
be difficult to know which drivers need modification. This is
especially true since often many devices are covered by the same
driver. It likely would be necessary to replicate code across dozens
of drivers.
The patch below should trigger only when transitioning from D3hot->D0
(or at boot), and only for devices that have the "no soft reset" bit
cleared in the PM control register. I believe it is safe to include
this patch as part of the PCI infrastructure.
The cleanest implementation of pci_restore_bars was to call
pci_update_resource. Unfortunately, that does not currently exist
for the sparc64 architecture. The patch below includes a null
implementation of pci_update_resource for sparc64.
Some have expressed interest in making general use of the
pci_restore_bars function, so it has been exported to GPL-licensed
modules.
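In outline, pci_restore_bars is just a loop over the device's resources
(a sketch; the helper's exact signature and the loop bound have differed over
time):

  /* Sketch: write cached resource values back into the BARs, e.g. after
   * a D3hot->D0 transition wiped the device's configuration. */
  void pci_restore_bars(struct pci_dev *dev)
  {
          int i;

          for (i = 0; i < PCI_NUM_RESOURCES; i++)
                  pci_update_resource(dev, i);
  }
  EXPORT_SYMBOL_GPL(pci_restore_bars);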
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This code was never designed to handle more than one instance of do_work()
running at once.
Signed-Off-By: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The recent change to never ignore the bitmap revealed that the bitmap isn't
being flushed properly when an array is stopped.
We call bitmap_daemon_work three times as there is a three-stage pipeline for
flushing updates to the bitmap file.
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Firstly, R1BIO_Degraded was being set in a number of places in the resync
code, but is never used there, so get rid of those settings.
Then: When doing a resync, we want to clear the bit in the bitmap iff the
array will be non-degraded when the sync has completed. However the current
code would clear the bitmap if the array was non-degraded when the resync
*started*, which obviously isn't right (it is for 'resync' but not for
'recovery' - i.e. rebuilding a failed drive).
This patch calculates 'still_degraded' and uses that to tell bitmap_start_sync
whether this sync should clear the corresponding bit.
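The calculation is roughly of this shape (a sketch against raid1-style
structures from the resync path; the field and flag names are approximations):

  /* Sketch: only let this sync clear bitmap bits if every configured
   * device is present and in sync, i.e. the array will end up
   * non-degraded when the sync completes. */
  static int sketch_still_degraded(conf_t *conf)
  {
          int i, degraded = 0;

          for (i = 0; i < conf->raid_disks; i++) {
                  mdk_rdev_t *rdev = conf->mirrors[i].rdev;

                  if (rdev == NULL || !test_bit(In_sync, &rdev->flags))
                          degraded = 1;
          }
          return degraded;
  }

  /* ... then, in sync_request(): */
  bitmap_start_sync(mddev->bitmap, sector_nr, &sync_blocks,
                    sketch_still_degraded(conf));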
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The code currently will ignore the bitmap if the array seem to be in-sync.
This is wrong if the array is degraded, and probably wrong anyway. If the
bitmap says some chunks are not in in-sync, and the superblock says everything
IS in sync, then something is clearly wrong, and it is safer to trust the
bitmap.
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Until the bitmap code was added,
modprobe md
would load the md module. But now the md module is called 'md-mod', so we
really need an alias for backward compatibility.
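The fix boils down to a module alias next to the other MODULE_ macros in
drivers/md/md.c:

  /* Let "modprobe md" keep working now that the module is md-mod */
  MODULE_ALIAS("md");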
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
A no_overlay bttv parameter is implemented to fix an OOPS on some PCI chipsets
(like some VIA) with these behaviors:
1) If the PCI quirks code identifies the chip as having trouble handling
PCI-to-PCI transfers, no_overlay defaults to 1. The user may force it to 0
to re-enable overlay (not recommended).
2) For newer chipsets not yet blacklisted, no_overlay=1 is provided as a
workaround until the PCI chipset is added to drivers/pci/quirks.c.
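A sketch of the parameter handling; the variable name, the quirk flag, and
the helper below are illustrative rather than the exact driver code:

  /* Sketch: no_overlay=-1 means "auto" (follow the PCI quirks),
   * 0 forces overlay on, 1 forces it off. */
  static int no_overlay = -1;
  module_param(no_overlay, int, 0444);
  MODULE_PARM_DESC(no_overlay, "allow override of the overlay default "
                               "(0 = force on, 1 = force off)");

  static int overlay_allowed(void)
  {
          if (no_overlay >= 0)
                  return !no_overlay;               /* user override */
          return !(pci_pci_problems & PCIPCI_FAIL); /* quirk default */
  }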
Thanks to Bodo Eggert <7eggert@gmx.de>
Signed-off-by: Michael Krufky <mkrufky@m1k.net>
Signed-off-by: Mauro Carvalho Chehab <mchehab@brturbo.com.br>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch fixes an oops caused by IDE interfaces that are not on PCI;
pcibus_to_node otherwise causes the kernel to crash. The patch also adds a
BUG_ON to check whether hwif is NULL.
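The guard looks roughly like this (a sketch; the pci_dev field is as in the
IDE code of that era):

  /* Sketch: only query a NUMA node when the interface is really on PCI;
   * otherwise report "no node preference". */
  static int hwif_to_node(ide_hwif_t *hwif)
  {
          BUG_ON(hwif == NULL);
          return hwif->pci_dev ? pcibus_to_node(hwif->pci_dev->bus) : -1;
  }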
Signed-off-by: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Shai Fultheim <shai@scalex86.org>
Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Cc: Andi Kleen <ak@muc.de>
Cc: Bartlomiej Zolnierkiewicz <B.Zolnierkiewicz@elka.pw.edu.pl>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Several people noticed we dropped quite a bit on benchmark figures.
OK, it was my fault but unfortunately I discovered I ran out of brown
paper bags a while ago and forgot to reorder them.
The issue is that a construct introduced in the conversion of the
driver to use the transport class keyed off whether the block request
was tagged or not. However, the aic7xxx driver doesn't properly set
up the block layer TCQ (it uses the wrong API), so the driver now
thinks all requests are untagged and we keep it to a queue depth of a
single element. Oops.
The fix is to use the correct TCQ API.
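Concretely, that means going through the SCSI TCQ helpers in the
slave-configure path so the block layer's tag state is set up as well; a
sketch using the midlayer API of that era:

  /* Sketch: tell both the midlayer and the block layer about tagging,
   * instead of only adjusting the queue depth. */
  if (sdev->tagged_supported)
          scsi_activate_tcq(sdev, queue_depth);   /* sets up block-layer tags */
  else
          scsi_deactivate_tcq(sdev, 2);           /* untagged: small depth */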
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
ACPI now uses kmalloc(..., GFP_ATOMIC) during suspend/resume.
http://bugzilla.kernel.org/show_bug.cgi?id=3469
Signed-off-by: David Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
For 2.6.12 behaviour, this (EXPERIMENTAL) driver
should not be built.
Update the driver source with the latest from Luming.
Signed-off-by: Luming Yu <luming.yu@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Burst mode isn't ready for prime time,
but it can be enabled for testing via "ec_burst=1"
Signed-off-by: Luming Yu <luming.yu@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Patch from Ian Campbell
On PXA255 there is no way to disable the watchdog. Turning off OIER[E3]
as suggested in the existing comment does not work.
I posted a note to the ARM mailing list a little while ago asking for
opinions from people using SA1100. There was one response from Nico who
believes that the SA1100 is the same as the PXA255 in this respect.
You also asked me to involve the watchdog maintainer which I tried to
do but didn't hear anything back. There are only a couple of other
drivers which can't stop the watchdog and there seems to be no
consistency regarding printing an error etc. I decided to print
something since that matches the case for all the other drivers when
NOWAYOUT is turned on.
Also, I changed the device .name to "watchdog" like most of the other
watchdogs. udev uses it as the device name (by default) and spaces etc.
get in the way.
Supersedes 2833/1 because 2.6.13-rc4 caused rejects.
Signed-off-by: Ian Campbell <icampbell@arcom.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch disables the PCI Interrupt Link reference counts,
which should not co-exist with the 2.6.12 irq_router.resume
method, or else a double acpi_pci_link_set() could result
on resume.
Signed-off-by: David Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
We have increased PCIBIOS_MIN_IO to 0x4000, but we still want
motherboard resources to be allocated properly. So we need
to state the 0x1000 limit (according to the comment) explicitly.
Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The reason we have PCIBIOS_MIN_IO and PCIBIOS_MIN_CARDBUS_IO is because
we want to protect badly documented motherboard PCI resources and thus
don't want to allocate new resources in low IO/MEM space.
However, if we have already discovered a PCI bridge with a specified
resource base, that should override that decision.
This change will allow us to move the "careful" region upwards without
resulting in problems allocating resources in low mappings. This was
brought on by us having allocated a bus resource at 0x1000, conflicting
with undocumented Sony VAIO PI resources.
CFQ will currently stall when using write barriers and the default
max_depth setting of 1, since we artificially need a depth of 2 when
pre-pending the first flush. So never deny the barrier request going to
the device.
This is a regression since 2.6.12, it was found in SUSE testing.
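Schematically, the dispatch check becomes something like the following
(a sketch only; the cfq field names are illustrative):

  /* Sketch: barrier (flush) requests must always reach the device, even
   * if that momentarily exceeds the configured depth of 1. */
  if (cfqd->rq_in_driver >= cfqd->max_depth && !blk_barrier_rq(rq))
          return 0;       /* hold back ordinary requests */
  /* barriers and in-limit requests are dispatched */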
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The aic7xxx can support Data Group transfers at periods > 12.5, so
eliminate that restriction. Additionally, wide is a requirement for DT,
so ensure wide is set if users request DT.
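The wide requirement looks roughly like this in the transport-class setter
(a sketch using the generic SPI transport attributes; the exact aic7xxx
placement is an assumption):

  /* Sketch: DT (dual-transition) clocking is only defined on a wide bus,
   * so a DT request implies forcing 16-bit width first. */
  static void sketch_set_dt(struct scsi_target *starget, int dt)
  {
          if (dt && spi_width(starget) == 0)
                  spi_width(starget) = 1; /* force wide before enabling DT */
          spi_dt(starget) = dt;
  }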
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
Rebuilding the aic7xxx firmware doesn't work anymore after this change,
which appeared in 2.6.13-rc1:
[SCSI] aic7xxx/aic79xx: remove useless byte order macro cruft
Two files did not include byteorder.h, resulting in aic dying with a panic
"Unknown opcode encountered in seq program"
This fixes it for me.
Signed-off-by: Olaf Hering <olh@suse.de>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Recent changes (well, dating from 12 July) have broken cardbus on my
powerbook: I get 3 messages saying "no resource of type xxx available,
trying to continue", and if I plug in my wireless card, it complains
that there are no resources allocated to the card. This all worked in
2.6.12.
Looking at the code in yenta_socket.c, function yenta_allocate_res,
it's obvious what is wrong: if we get to line 639 (i.e. there wasn't a
usable preassigned resource), we will always flow through to line 668,
which is the printk that I was seeing, even if a resource was
successfully allocated. It looks to me as though there should be a
return statement after the two config_writel's in each of the 3
branches of the if statements, so that the function returns after
successfully setting up the resource.
The patch below adds these return statements, and with this patch,
cardbus works on my powerbook once again.
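The shape of the fix, as described above (a sketch of one of the three
branches):

  /* Sketch: once a freshly allocated resource has been programmed into
   * the bridge, return instead of falling through to the failure printk. */
  if (allocate_resource(root, res, size, start, end, align,
                        NULL, NULL) == 0) {
          config_writel(socket, addr_start, res->start);
          config_writel(socket, addr_end, res->end);
          return;         /* success -- don't print "no resource ..." */
  }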
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Removing the SCSI tape module results in an oops in class_device_destroy if
any devices are present. The patch at the end of this message fixes the bug
by moving class_destroy() later in exit_st() so that the class still exists
when devices are removed. (The bug is old but class_simple_device_remove() did
nothing when the class did not exist.)
The patch also fixes a "class leak" in init_st() error path.
I would like to get this into 2.6.13 but it may be too late?
Signed-off-by: Kai Makisara <kai.makisara@kolumbus.fi>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
I am resubmitting the 2.6 kernel patch for the Version 7.12.02 ips driver.
I have eliminated a couple of inappropriate changes pointed out by Arjan.
Signed-off-by: Jack Hammer <jack_hammer@adaptec.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
This patch corrects radio chip autodetection to avoid misdetecting the
Microtune mt20xx as a tea5767 chip.
Signed-off-by: Mauro Carvalho Chehab <mchehab@brturbo.com.br>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Martin Drab found that he could get aacraid timeouts with high load on his
controller / disk drive combinations. After some experimentation Mark
Salyzyn has come up with a patch to reduce the default max_sectors to
something that will keep the controller from being overloaded and will
eliminate the timeout issues.
Signed-off-by: Mark Haverkamp <markh@osdl.org>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Acked-by: Mark Salyzyn <mark_salyzyn@adaptec.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
cs89x0 talks a lot at boot; it seems like debug leftover. This patch
downgrades the printks to KERN_DEBUG. While we're at it, make these messages
a bit less obscure.
Cc: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The default resync_max_sectors is set to "mddev->size << 1". If the
raid personality module updates mddev->size, it must update
resync_max_sectors too.
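I.e. wherever a personality changes mddev->size, it also needs something
along these lines (new_size is just a placeholder):

  mddev->size = new_size;                         /* in 1K blocks */
  mddev->resync_max_sectors = mddev->size << 1;   /* keep in step (sectors) */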
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>