Previous changes reveal some obvious cruft.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
We also only ever receive one value of the signalg, so let's not pretend otherwise.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
We designed the krb5 context import without completely understanding the
context. It is now clear that there are a number of fields that we either
ignore or depend on having a single value.
In particular, we currently support only one value of signalg, so let's
check the signalg field in the downcall (in case we decide there's
something else we could support here eventually), but ignore it otherwise.
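For illustration, a minimal sketch of that check, using the helper and
constant names from the existing gss_krb5 import code (the surrounding
context parsing is elided):

    u32 tmp;

    /* Read the signalg word from the downcall; reject anything we
     * don't implement, but otherwise ignore the field. */
    p = simple_get_bytes(p, end, &tmp, sizeof(tmp));
    if (IS_ERR(p))
        goto out_err;
    if (tmp != SGN_ALG_DES_MAC_MD5)    /* the only signalg we support */
        goto out_err;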
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
This updates the spkm3 code to bring it up to date with our current
understanding of the spkm3 spec.
In doing so, we're changing the downcall format used by gssd in the spkm3 case,
which will cause an incompatibility with the old userland spkm3 support. Since
the old code a) didn't implement the protocol correctly, and b) was never
distributed except in the form of some experimental patches from the CITI web
site, we're assuming this is OK.
We do detect the old downcall format, print a warning, and fail. We also
include a version number in the new downcall format, to be used in the
future should any further change be required (see the sketch after the
list below).
In some more detail:
- fixed integrity support
- removed the dependency on NIDs; OIDs are used instead
- added known OID values for the algorithms
- fixed some context fields and types
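A minimal sketch of the version handling described above (the field name
and version constant are illustrative, not the exact downcall parser):

    u32 version;

    /* The new downcall leads with a version word, so that any future
     * format change can be detected cleanly. */
    p = simple_get_bytes(p, end, &version, sizeof(version));
    if (IS_ERR(p))
        goto out_err;
    if (version != 1) {
        dprintk("RPC: unsupported spkm3 downcall format %u "
                "(older gssd?)\n", version);
        goto out_err;
    }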
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Since process_xdr_buf() is useful outside of the kerberos-specific code, we
move it to net/sunrpc/xdr.c, export it, and rename it in keeping with the
xdr_* naming convention of xdr.c.
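The renamed helper ends up with roughly this shape (a sketch of the
exported interface; it walks the head, page list, and tail of the
xdr_buf, handing each contiguous chunk to the caller's actor as a
single-entry scatterlist):

    int xdr_process_buf(struct xdr_buf *buf, unsigned int offset,
                        unsigned int len,
                        int (*actor)(struct scatterlist *, void *),
                        void *data);
    EXPORT_SYMBOL_GPL(xdr_process_buf);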
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
This code is never called from interrupt context; it's always run by either
a user thread or rpciod. So KM_SKB_SUNRPC_DATA is inappropriate here.
Thanks to Aimé Le Rouzic for capturing an oops which showed the kernel
taking an interrupt while we were in this piece of code, resulting in a
nested kmap_atomic(.,KM_SKB_SUNRPC_DATA) call from
xdr_partial_copy_from_skb().
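The fix amounts to switching to a process-context kmap slot (a sketch,
assuming KM_USER0 as the replacement; page, offset, buf, and len stand
in for the real arguments):

    /* KM_SKB_SUNRPC_DATA belongs to the softirq receive path
     * (xdr_partial_copy_from_skb); reusing it here allows a nested
     * kmap_atomic() on the same slot when an interrupt arrives.
     * This code only runs in process context, so use KM_USER0. */
    kaddr = kmap_atomic(page, KM_USER0);
    memcpy(kaddr + offset, buf, len);
    kunmap_atomic(kaddr, KM_USER0);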
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Dumping all this data to the logs is wasteful (even when debugging is turned
off), and creates too much output to be useful when it's turned on.
Fix a minor style bug or two while we're at it.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Don't wake up bind waiters if a task finds that another task is already
trying to bind.
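Roughly the intended pattern (a sketch; the real port-binding path has
more setup around it):

    /* If another task already holds XPRT_BINDING, queue up behind it
     * quietly; waking the other bind waiters now would be pointless,
     * since nothing about the binding state has changed yet. */
    if (test_and_set_bit(XPRT_BINDING, &xprt->state)) {
        rpc_sleep_on(&xprt->binding, task, NULL, NULL);
        return;
    }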
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
We're really accounting for the same page twice now: once in
generic_writepages(), and once in nfs_scan_dirty().
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
There is no longer any reason to account for dirty pages in the NFS code,
since the VM now does it for us via __set_page_dirty_nobuffers() and
set_page_writeback().
We still need to keep the accounting of stable writes, though.
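In other words, marking the page dirty through the generic VM helper is
now sufficient (sketch):

    /* The VM bumps the dirty-page counters in here; no separate
     * NFS-side accounting is needed any more. */
    __set_page_dirty_nobuffers(req->wb_page);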
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
invalidate_inode_pages2_range() will clear the PG_dirty bit before calling
try_to_release_page().
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
This will ensure that we can call set_page_writeback() from within
nfs_writepage(), which is always called with the page lock set.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
We will want to allow nfs_writepage() to distinguish between pages that
have been marked as dirty by the VM, and those that have been marked as
dirty by nfs_updatepage().
In the former case, the entire page will want to be written out, and so any
requests that were pending need to be flushed out first.
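A sketch of the distinction (the test helper is hypothetical;
nfs_wb_page() is the existing flush routine):

    /* A page dirtied by the VM must be written out in full, so flush
     * any partial-page requests that are already pending on it. */
    if (nfs_page_dirtied_by_vm(page))    /* hypothetical test */
        nfs_wb_page(inode, page);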
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Maintaining two parallel ways of doing synchronous writes is rather
pointless. This patch gets rid of the legacy nfs_writepage_sync(), and
replaces it with the faster asynchronous writes.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
We always ensure that the nfs_open_context holds a reference to the dentry,
so the test in nfs_writepage() for whether or not the inode is referenced
is redundant.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
This will allow fast lookup of the nfs_page from the struct page instead of
having to search the radix tree.
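The natural mechanism for this is to stash the request pointer in
page->private (sketch):

    /* attach: remember the nfs_page in the struct page itself */
    SetPagePrivate(req->wb_page);
    set_page_private(req->wb_page, (unsigned long)req);

    /* lookup: O(1), instead of searching the radix tree */
    if (PagePrivate(page))
        req = (struct nfs_page *)page_private(page);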
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Change the location where the rpc_xprt structure is allocated so each
transport implementation can allocate a private area from the same
chunk of memory.
Note also that xprt->ops->destroy, rather than xprt_destroy, is now
responsible for freeing rpc_xprt when the transport is destroyed.
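Concretely, a transport can now embed struct rpc_xprt at the head of its
own structure, so a single allocation covers both (the struct and field
names here are illustrative):

    struct sock_xprt {
        struct rpc_xprt     xprt;    /* must come first */
        struct socket      *sock;    /* transport-private state */
    };

    static void xs_destroy(struct rpc_xprt *xprt)
    {
        /* ops->destroy now frees the whole allocation */
        kfree(container_of(xprt, struct sock_xprt, xprt));
    }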
Test plan:
Connectathon.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Move the xid field in the rpc_xprt structure to be in the same cache line
as the reserve_lock, since these are used at the same time.
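That is, the layout now groups the two fields (a sketch of the relevant
part of struct rpc_xprt; the other fields are elided):

    struct rpc_xprt {
        ...
        spinlock_t    reserve_lock;    /* protects the slot table */
        __u32         xid;             /* bumped under reserve_lock, so
                                        * keep it in the same cache line */
        ...
    };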
Test plan:
None.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Remove use of the Big Kernel Lock around indirect calls to
nfs3_proc_readlink and nfs4_proc_readlink, both of which
basically call rpc_call_sync.
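The change follows a simple pattern (sketch; the rpc_message setup is
elided):

    /* before */
    lock_kernel();
    status = rpc_call_sync(clnt, &msg, 0);
    unlock_kernel();

    /* after: rpc_call_sync() needs nothing that the BKL protects */
    status = rpc_call_sync(clnt, &msg, 0);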
Signed-off-by: Frank Filz <ffilz@us.ibm.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Remove use of the Big Kernel Lock around calls to rpc_call_sync.
Signed-off-by: Frank Filz <ffilz@us.ibm.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Remove use of the Big Kernel Lock around calls to rpc_execute.
Signed-off-by: Frank Filz <ffilz@us.ibm.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
All internal RPC client operations should no longer depend on the BKL,
however lockd and NFS callbacks may still require it.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Currently, nfs_sync_inode_wait() fails to loop correctly when it is called
with the FLUSH_INVALIDATE argument.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
We must always call ->read_done() before we truncate the page data or
decide to flag an error. The reasons are that:
- in NFSv2, ->read_done() is where the eof flag gets set;
- in NFSv3/v4, ->read_done() handles EJUKEBOX-type errors and NFSv4 state
  recovery.
However, we need to mark the pages as uptodate before we deal with short
read errors, since we may need to modify the nfs_read_data arguments.
We therefore split the current nfs_readpage_result() into two parts:
nfs_readpage_result(), which calls ->read_done() etc., and
nfs_readpage_retry(), which subsequently handles short reads.
Note: Removing the code that retries in case of a short read also fixes a
bug in nfs_direct_read_result(), which used to return a corrupted number of
bytes.
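In outline, the split looks like this (a simplified sketch; error paths
and the short-read bookkeeping are elided):

    int nfs_readpage_result(struct rpc_task *task, struct nfs_read_data *data)
    {
        /* Let ->read_done() see the result first: it sets eof (v2),
         * and handles EJUKEBOX-type errors and state recovery (v3/v4). */
        int status = NFS_PROTO(data->inode)->read_done(task, data);

        if (status != 0)
            return status;
        /* Only then deal with short reads. */
        nfs_readpage_retry(task, data);
        return 0;
    }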
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
When trying to open a file with the O_EXCL flag over NFS on a server that does
not support exclusive mode, the file does not open. The reason is that
rpc_call_sync returns a negative errno, not the NFS error number. I fixed
it by changing the status check in nfs3proc.c. Either this is how it should
be fixed, or rpc_call_sync should be fixed to return the NFS error.
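The fix, in essence (a sketch of the check in nfs3proc.c; the fallback
label is hypothetical):

    status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
    /* rpc_call_sync() returns a negative errno, not an NFS status */
    if (status == -ENOTSUPP)        /* was: status == NFS3ERR_NOTSUPP */
        goto retry_guarded;         /* fall back to a guarded create */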
Signed-off-by: Andy Ryan <genanr@allantgroup.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Use RCU to ensure that we can safely call rpc_finish_wakeup after we've
called __rpc_do_wake_up_task. Without it, there is a theoretical race in
which the rpc_task finishes executing and gets freed first.
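The resulting pattern (a sketch; the wait-queue locking is abbreviated):

    rcu_read_lock_bh();
    if (rpc_start_wakeup(task)) {
        if (RPC_IS_QUEUED(task))
            __rpc_do_wake_up_task(task);
        /* safe even if the task finishes meanwhile: RCU delays
         * the final free until we drop the read lock */
        rpc_finish_wakeup(task);
    }
    rcu_read_unlock_bh();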
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
The sunrpc scheduler contains a race condition that can let an RPC
task end up being neither running nor on any wait queue. The race takes
place between rpc_make_runnable (called from rpc_wake_up_task) and
__rpc_execute under the following condition:
First, __rpc_execute calls tk_action, which puts the task on some wait
queue. The task is dequeued by another process before __rpc_execute
continues its execution. If __rpc_execute then picks up execution exactly
after rpc_make_runnable has set the task's `running' bit but before it has
cleared the `queued' bit, __rpc_execute clears `running', and both
functions subsequently fall through, each under the false assumption that
somebody else took over the job.
Swapping rpc_test_and_set_running with rpc_clear_queued in
rpc_make_runnable fixes that hole. This introduces another possible
race condition that can be handled by checking for `queued' after
setting the `running' bit.
Bug noticed on a 4-way x86_64 system under XEN with an NFSv4 server
on the same physical machine, apparently one of the few ways to hit
this race condition at all.
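The reordered section of rpc_make_runnable then looks roughly like this
(sketch):

    rpc_clear_queued(task);
    if (rpc_test_and_set_running(task))
        return;
    /* We might have raced with __rpc_execute: now that `running' is
     * set, re-check `queued' and back off if the task was re-queued. */
    if (RPC_IS_QUEUED(task)) {
        rpc_clear_running(task);
        return;
    }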
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Cc: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Christophe Saout <christophe@saout.de>
Signed-off-by: Trond Myklebust <trond.myklebust@fys.uio.no>
Simple patch to add the new PCIe version of the 29320 card.
Signed-off-by: Mark Salyzyn <Mark_Salyzyn@adaptec.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
The original wait loop may run much longer than the intended time. Use the
more accurate time_after() instead. Also adjust the wait value to avoid
unnecessarily long waits.
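The bounded-wait pattern in question (a sketch; the polled condition and
timeout value are illustrative):

    unsigned long before = jiffies;

    while (!controller_ready()) {    /* hypothetical condition */
        if (time_after(jiffies, before + msecs_to_jiffies(timeout_ms)))
            return -ETIMEDOUT;       /* waited long enough */
        msleep(1);
    }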
Signed-off-by: Ed Lin <ed.lin@promise.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>