The compression code had some checks to make sure we were only
compressing bytes inside of i_size, but they weren't catching every
case. To make things worse, incorrect math about the number of bytes
remaining would make it try to compress more pages than the file
really had.
The fix used here is to fall back to the non-compression code in this
case, which does all the proper cleanup of delalloc and other accounting.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
With selinux on we end up calling __btrfs_setxattr when we create an inode,
which calls btrfs_start_transaction(). The problem is we've already called
that in btrfs_new_inode, and in btrfs_start_transaction we end up doing a
wait_current_trans(). If btrfs-transaction has started committing, it will
wait for all handles to finish, while the other process is waiting for the
transaction to commit: a deadlock. This is fixed by using
btrfs_join_transaction, which won't wait for the transaction to commit. Thanks,
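For reference, a sketch of the two paths that deadlock (call chains
simplified):

    /*
     * task A: btrfs_new_inode()
     *           btrfs_start_transaction()     <- holds an open handle
     *           ...
     *           __btrfs_setxattr()
     *             btrfs_start_transaction()
     *               wait_current_trans()      <- waits for the commit
     *
     * btrfs-transaction: commit_transaction()
     *           waits for all open handles    <- waits on task A
     *
     * btrfs_join_transaction() attaches to the running transaction
     * without waiting for the commit, breaking the cycle.
     */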
Signed-off-by: Josef Bacik <jbacik@redhat.com>
Before this patch, new files/dirs would ignore the SGID bit on their
parent directory and always be owned by the creating user's uid/gid.
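The conventional SGID handling the fix applies looks roughly like this
sketch of the inode-creation path (current_fsuid()/current_fsgid() are
the standard helpers):

    inode->i_uid = current_fsuid();
    if (dir && (dir->i_mode & S_ISGID)) {
            inode->i_gid = dir->i_gid;      /* inherit gid from parent */
            if (S_ISDIR(mode))
                    mode |= S_ISGID;        /* new subdirs keep SGID */
    } else {
            inode->i_gid = current_fsgid();
    }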
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Every transaction in btrfs creates a new snapshot, and then schedules the
snapshot from the last transaction for deletion. Snapshot deletion
works by walking down the btree and dropping the reference counts
on each btree block during the walk.
If a given leaf or node has a reference count greater than one,
the reference count is decremented and the subtree pointed to by that
node is ignored.
If the reference count is one, walking continues down into that node
or leaf, and the references of everything it points to are decremented.
The old code would try to work in small pieces, walking down the tree
until it found the lowest leaf or node to free and then returning. This
was very friendly to the rest of the FS because it didn't have a huge
impact on other operations.
But it wouldn't always keep up with the rate that new commits added new
snapshots for deletion, and it wasn't very optimal for the extent
allocation tree because it wasn't finding leaves that were close together
on disk and processing them at the same time.
This changes things to walk down to a level 1 node and then process it
in bulk. All the leaf pointers are sorted and the leaves are dropped
in order based on their extent number.
The extent allocation tree and commit code are now fast enough for
this kind of bulk processing to work without slowing the rest of the FS
down. Overall it does less IO and is better able to keep up with
snapshot deletions under high load.
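As a sketch, the bulk pass over a single level-1 node might look like
this (drop_leaf_ref() and the on-stack array are hypothetical; the
accessors and sort() are the normal kernel helpers):

    #include <linux/sort.h>

    static int comp_bytenr(const void *a, const void *b)
    {
            const u64 *x = a, *y = b;

            if (*x < *y)
                    return -1;
            return *x > *y;
    }

    static void drop_leaves_in_order(struct btrfs_root *root,
                                     struct extent_buffer *node)
    {
            u64 bytenr[512];        /* sketch only: assumes nr <= 512,
                                       real code would allocate this */
            int i, nr = btrfs_header_nritems(node);

            /* collect every leaf pointer in the node */
            for (i = 0; i < nr; i++)
                    bytenr[i] = btrfs_node_blockptr(node, i);

            /* sort by disk location so reads are nearly sequential */
            sort(bytenr, nr, sizeof(u64), comp_bytenr, NULL);

            for (i = 0; i < nr; i++)
                    drop_leaf_ref(root, bytenr[i]); /* hypothetical */
    }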
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Most of the btrfs metadata operations can be protected by a spinlock,
but some operations still need to schedule.
So far, btrfs has been using a mutex along with a trylock loop; most
of the time it is able to avoid going for the full mutex, so the
trylock loop is a big performance gain.
This commit is step one for getting rid of the blocking locks entirely.
btrfs_tree_lock takes a spinlock, and the code explicitly switches
to a blocking lock when it starts an operation that can schedule.
We'll be able to get rid of the blocking locks in smaller pieces over time.
Tracing allows us to find the most common cause of blocking, so we
can start with the hot spots first.
The basic idea is:
btrfs_tree_lock() returns with the spin lock held
btrfs_set_lock_blocking() sets the EXTENT_BUFFER_BLOCKING bit in
the extent buffer flags, and then drops the spin lock. The buffer is
still considered locked by all of the btrfs code.
If btrfs_tree_lock gets the spinlock but finds the blocking bit set, it drops
the spin lock and waits on a wait queue for the blocking bit to go away.
Much of the code that needs to set the blocking bit finishes without actually
blocking a good percentage of the time. So, an adaptive spin is still
used against the blocking bit to avoid very high context switch rates.
btrfs_clear_lock_blocking() clears the blocking bit and returns
with the spinlock held again.
btrfs_tree_unlock() can be called on either blocking or spinning locks;
it does the right thing based on the blocking bit.
ctree.c has a helper function to set/clear all the locked buffers in a
path as blocking.
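A condensed sketch of the scheme (assuming the extent buffer carries a
spinlock, a wait queue, and a flags word; the real code also layers the
adaptive spin on top of this):

    /* returns with eb->lock (the spinning form) held */
    void btrfs_tree_lock(struct extent_buffer *eb)
    {
            for (;;) {
                    spin_lock(&eb->lock);
                    if (!test_bit(EXTENT_BUFFER_BLOCKING, &eb->flags))
                            return;         /* spinning lock acquired */
                    /* holder went blocking: drop the spinlock and
                     * wait for the blocking bit to clear */
                    spin_unlock(&eb->lock);
                    wait_event(eb->lock_wq,
                               !test_bit(EXTENT_BUFFER_BLOCKING,
                                         &eb->flags));
            }
    }

    /* switch a held spinning lock to the blocking form */
    void btrfs_set_lock_blocking(struct extent_buffer *eb)
    {
            set_bit(EXTENT_BUFFER_BLOCKING, &eb->flags);
            spin_unlock(&eb->lock); /* btrfs still considers eb locked */
    }

    /* switch back; returns with the spinlock held again */
    void btrfs_clear_lock_blocking(struct extent_buffer *eb)
    {
            spin_lock(&eb->lock);
            clear_bit(EXTENT_BUFFER_BLOCKING, &eb->flags);
            wake_up(&eb->lock_wq);
    }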
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Before metadata is written to disk, it is updated to reflect that writeout
has begun. Once this update is done, the block must be cow'd before it
can be modified again.
This update was originally synchronized by using a per-fs spinlock. Today
the buffers for the metadata blocks are locked before writeout begins,
and everyone that tests the flag has the buffer locked as well.
So, the per-fs spinlock (called hash_lock for no good reason) is no
longer required.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
extent_io.c has debugging code to report and free leaked extent_state
and extent_buffer objects at rmmod time. This helps track down
leaks and it saves you from rebooting just to properly remove the
kmem_cache object.
But, the code runs under a fairly expensive spinlock and the checks to
see if it is currently enabled are not entirely consistent. Some use
#ifdef and some #if.
This changes everything to #if and disables the leak checking.
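The distinction matters: #ifdef only tests whether the macro is defined,
while #if tests its value, so defining the knob to 0 only works with the
latter. Roughly:

    #define LEAK_DEBUG 0            /* leak checking compiled out */

    #if LEAK_DEBUG
            /* expensive spinlock and leak lists, only when enabled */
    #endif

    /* "#ifdef LEAK_DEBUG" would still be true here, since the macro
     * is defined (just to 0); this is why mixing the two broke */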
Signed-off-by: Chris Mason <chris.mason@oracle.com>
When a block goes through cow, we update the reference counts of
everything that block points to. The internal pointers of the block
can be in just about any order, and it is likely to have clusters of
things that are close together and clusters of things that are not.
To help reduce the seeks that come with updating all of these reference
counts, sort them by byte number before actual updates are done.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Tracing shows the delay between when an async thread goes to sleep
and when more work is added is often very short. This commit adds
a little bit of delay and extra checking to the code right before
we schedule out.
It allows more work to be added to the worker
without requiring notifications from other procs.
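The shape of the change, as a hypothetical worker loop (the worker
struct and helper names are made up; the kernel primitives are real):

    while (!kthread_should_stop()) {
            if (!list_empty(&worker->pending)) {
                    run_pending_work(worker);       /* hypothetical */
                    continue;
            }
            /* doze briefly: new work often arrives right after we
             * decide to sleep, and this avoids a wakeup round trip */
            set_current_state(TASK_INTERRUPTIBLE);
            schedule_timeout(1);
            if (!list_empty(&worker->pending))
                    continue;                       /* caught late work */
            set_current_state(TASK_INTERRUPTIBLE);
            if (list_empty(&worker->pending))
                    schedule();                     /* really sleep now */
            __set_current_state(TASK_RUNNING);
    }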
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Add call to LSM security initialization and save
resulting security xattr for new inodes.
Add xattr support to symlink inode ops.
Set inode->i_op for existing special files.
Signed-off-by: jim owens <jowens@hp.com>
This patch adds a menu entry to kconfig to enable acls for btrfs.
This allows you to enable FS_POSIX_ACL at kernel compile time.
(updated by Jeff Mahoney to make the changes in fs/btrfs/Kconfig instead)
Signed-off-by: Christian Hesse <mail@earthworm.de>
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
The async bio submission thread was missing some bios that were
added after it had decided there was no work left to do.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
The removed version, with the loop registers saved on the stack, was
originally intended to work around the missing toolchain support for
LoopReg clobbers.
Since our toolchain now supports these, there is no point in keeping
this workaround. And since we don't touch LoopRegs anymore, we're no
longer subject to ANOMALY_05000312.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
Avoid possible overflow during 32*32->32 multiplies.
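Illustratively (not the exact fix), the hazard and the usual remedy in
C look like:

    u32 a = 100000, b = 100000;
    u32 bad  = a * b;       /* 10^10 wraps: product truncated to 32 bits */
    u64 good = (u64)a * b;  /* widening one operand keeps all 64 bits */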
Reported-by: Marco Reppenhagen <marco.reppenhagen@auerswald.de>
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
use a do...while loop rather than a for loop to get slightly better
optimization and to avoid gcc "may be used uninitialized" warnings ...
we know that the [id]cplb_nr_bounds variables will never be 0, so this
is OK
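The pattern, roughly (compute_bound()/consume() are stand-ins):

    int i;
    unsigned long val;

    /* gcc may warn that "val" is used uninitialized after this loop,
     * since it can't prove the body runs at least once: */
    for (i = 0; i < icplb_nr_bounds; i++)
            val = compute_bound(i);
    consume(val);

    /* the do...while form makes the first iteration unconditional,
     * which is correct because icplb_nr_bounds is never 0: */
    i = 0;
    do {
            val = compute_bound(i);
    } while (++i < icplb_nr_bounds);
    consume(val);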
Signed-off-by: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
Enable null pointer checking for ICPLBs. The code was there but for
some reason I had commented it out at some stage during development.
Should restrict this to 1K since atomic ops start there.
Signed-off-by: Bernd Schmidt <bernds_cb1@t-online.de>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
- unable to single step over the emuexcpt instruction
- gdbproxy goes into an infinite loop when gdb does "next" over
  "emuexcpt"
Don't decrement the PC after a software breakpoint.
Signed-off-by: Jie Zhang <jie.zhang@analog.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
The bss_l2 section still contains garbage when data in it is used by
_bfin_relocate_l1_mem, so move the zeroing of the section ahead of the
relocation.
Signed-off-by: Graf Yang <graf.yang@analog.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
base SIC_IWR# programming on whether the MMR exists
rather than having to maintain another list of processors
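Sketch of the idea (assuming the usual bfin_write_<MMR>() accessors;
the value written is illustrative):

    /* key the code off the MMR macro itself instead of a CPU list */
    #ifdef SIC_IWR1
            bfin_write_SIC_IWR1(0);
    #endif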
Signed-off-by: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
Drop CONFIG_I2C_BOARDINFO ifdefs as the common i2c header handles this
already by stubbing things out
Signed-off-by: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
Avoid conditional branch instructions during carry bit additions.
Special thanks to Bernd.
Simplify: Use ((len + proto) << 8) like every other __LITTLE_ENDIAN__ machine
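For the carry handling, the general branch-free trick in C terms (a
sketch of the technique, not the generated Blackfin assembly):

    static inline unsigned int add_carry(unsigned int sum, unsigned int v)
    {
            sum += v;               /* may wrap */
            return sum + (sum < v); /* (sum < v) is 1 iff a carry
                                       occurred; fold it back in
                                       without a branch */
    }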
Cc: Bernd Schmidt <bernds_cb1@t-online.de>
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
[Mike Frysinger <vapier.adi@gmail.com>:
- setup P_DEFAULT_BOOT_SPI_CS for every arch based on
the default bootrom behavior and convert all our boards
to it
- revert previous anomaly change ... bf51x is not affected
by anomaly 05000353]
Signed-off-by: Sonic Zhang <sonic.zhang@analog.com>
Signed-off-by: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
remove the subscribers-only marking, as the list is
automatically & silently moderated for people who are not subscribed
Signed-off-by: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
merge more of the bf54x and !bf54x gpio code together to
cut down on #ifdef mess
Signed-off-by: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
According to the documentation, gpio_free should only be called from
task context. To make this more explicit, add a might_sleep() to all
implementations.
This patch changes the gpio_free implementations for the blackfin
architecture.
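Per implementation, the change amounts to this sketch:

    void gpio_free(unsigned gpio)
    {
            might_sleep();  /* with sleep debugging enabled, warns if
                               we're called from atomic context */

            /* ... existing release logic unchanged ... */
    }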
Signed-off-by: Uwe Kleine-Koenig <ukleinek@strlen.de>
Cc: David Brownell <david-b@pacbell.net>
Acked-by: Bryan Wu <cooloney@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Make sure we don't accidentally re-enable interrupts if we are being
called in atomic context
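The usual shape of the hazard and the fix (a generic sketch, not the
exact Blackfin code):

    unsigned long flags;

    /* wrong in atomic context: unconditionally turns interrupts back
     * on, even if the caller was running with them disabled */
    local_irq_disable();
    /* ... critical section ... */
    local_irq_enable();

    /* safe: restores whatever interrupt state the caller had */
    local_irq_save(flags);
    /* ... critical section ... */
    local_irq_restore(flags);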
Signed-off-by: Robin Getz <rgetz@blackfin.uclinux.org>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
When requesting a GPIO for the first time, the POLAR setting is not
set to a sane state. This can lead to indeterminate behavior that
cannot be resolved without an explicit write to the Blackfin port POLAR
register.
When requesting a GPIO for the first time via gpio_request(), the POLAR
setting for the GPIO in question should be set to a sane state. This
should only happen if the GPIO has not been allocated in any other way.
Some examples:
- when doing something like "request_irq(); gpio_request();" on the
same GPIO, the POLAR setting should not be reset.
- when doing "gpio_request(); gpio_request();" on the same GPIO, the
POLAR setting should be reset only the first time and not the second.
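In gpio_request(), that behavior boils down to this sketch
(is_reserved() stands in for the driver's "already grabbed via
irq/peripheral/gpio code" bookkeeping; set_gpio_polar() is assumed
from the Blackfin GPIO API):

    if (!is_reserved(gpio))             /* first allocation of any kind */
            set_gpio_polar(gpio, 0);    /* reset POLAR to its default */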
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
On BF561, EBIU_SDGCTL bit 31 controls the SDRAM external data
path width, typically set to 0 for a 32-bit bus width. On other
Blackfin derivatives this bit should be set by default.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
Dmacopy failed on the BF537-STAMP when copying from SRAM to SDRAM, and
the kernel would reboot automatically.
Fix this by doing an SSYNC before mucking with the DMA registers.
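The fix is essentially one instruction before the register writes
(SSYNC() is the Blackfin system-sync wrapper):

    /* force out pending system-bus writes before touching the DMA
     * channel MMRs */
    SSYNC();
    /* ... program DMA source/dest/config and start the transfer ... */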
Signed-off-by: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>