RPC: killing RPC tasks races fixed
The RPC task's RPC_TASK_QUEUED bit must be checked before trying to wake the task up in rpc_killall_tasks(), because task->tk_waitqueue may not be set yet (i.e. it can still be NULL). Also, as Trond Myklebust mentioned, this approach (instead of checking tk_waitqueue against NULL) allows us to "optimise away the call to rpc_wake_up_queued_task() altogether for those tasks that aren't queued".

Here is an example of dereferencing tk_waitqueue while it is still NULL:

CPU 0                           CPU 1                           CPU 2
--------------------            ---------------------           --------------------------
nfs4_run_open_task
rpc_run_task
rpc_execute
rpc_set_active
rpc_make_runnable
(waiting)
                                rpc_async_schedule
                                nfs4_open_prepare
                                nfs_wait_on_sequence
                                                                nfs_umount_begin
                                                                rpc_killall_tasks
                                                                rpc_wake_up_task
                                                                rpc_wake_up_queued_task
                                                                spin_lock(tk_waitqueue == NULL)
                                                                BUG()
                                rpc_sleep_on
                                spin_lock(&q->lock)
                                __rpc_sleep_on
                                task->tk_waitqueue = q

Signed-off-by: Stanislav Kinsbursky <skinsbursky@openvz.org>
Cc: stable@kernel.org
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
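For context, the right-hand column of the diagram is the sleep side: the task only gets its tk_waitqueue pointer inside __rpc_sleep_on(), under the wait queue's lock, and that is also where the RPC_TASK_QUEUED bit gets set. The sketch below is heavily abridged and paraphrased, not the actual kernel source; RPC_IS_QUEUED() is the test_bit() helper from include/linux/sunrpc/sched.h, and only the two assignments that matter for this race are kept.

/* include/linux/sunrpc/sched.h (abridged): the bit that the fix tests. */
#define RPC_IS_QUEUED(t)	test_bit(RPC_TASK_QUEUED, &(t)->tk_runstate)

/* Paraphrased sketch of the sleep side in net/sunrpc/sched.c (not the
 * full function): the task is only attached to a wait queue while that
 * queue's lock is held, so tk_waitqueue can still be NULL when
 * rpc_killall_tasks() looks at the task from another CPU. */
static void __rpc_sleep_on(struct rpc_wait_queue *q, struct rpc_task *task,
			   rpc_action action)
{
	/* caller (rpc_sleep_on) takes q->lock before calling us */
	task->tk_waitqueue = q;		/* pointer becomes valid here */
	rpc_set_queued(task);		/* RPC_TASK_QUEUED bit is set */
	/* ... link the task onto the queue, record tk_action, etc. ... */
}

Testing RPC_IS_QUEUED() in rpc_killall_tasks() therefore skips the wakeup, and with it the NULL dereference, for any task that has not reached this point yet.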
parent ba3c578de2
commit 8e26de238f
@@ -436,7 +436,9 @@ void rpc_killall_tasks(struct rpc_clnt *clnt)
 		if (!(rovr->tk_flags & RPC_TASK_KILLED)) {
 			rovr->tk_flags |= RPC_TASK_KILLED;
 			rpc_exit(rovr, -EIO);
-			rpc_wake_up_queued_task(rovr->tk_waitqueue, rovr);
+			if (RPC_IS_QUEUED(rovr))
+				rpc_wake_up_queued_task(rovr->tk_waitqueue,
+						rovr);
 		}
 	}
 	spin_unlock(&clnt->cl_lock);
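For readability, here is a sketch of how the hunk sits inside rpc_killall_tasks() after the change. Only the loop body comes from the diff above; the function header, the list walk, and the locking frame are reconstructed and may differ in detail from the actual net/sunrpc/clnt.c of that era.

void rpc_killall_tasks(struct rpc_clnt *clnt)
{
	struct rpc_task *rovr;

	/* Walk the client's task list under cl_lock and mark every task
	 * killed.  (Loop setup paraphrased; only the body is from the hunk.) */
	spin_lock(&clnt->cl_lock);
	list_for_each_entry(rovr, &clnt->cl_tasks, tk_task) {
		if (!(rovr->tk_flags & RPC_TASK_KILLED)) {
			rovr->tk_flags |= RPC_TASK_KILLED;
			rpc_exit(rovr, -EIO);
			/* New: only touch tk_waitqueue once the task is
			 * known to be queued; otherwise it may be NULL. */
			if (RPC_IS_QUEUED(rovr))
				rpc_wake_up_queued_task(rovr->tk_waitqueue,
						rovr);
		}
	}
	spin_unlock(&clnt->cl_lock);
}

The RPC_IS_QUEUED() guard is just a test_bit() on tk_runstate, so it is cheap for tasks that are queued, and per the commit message it optimises away the rpc_wake_up_queued_task() call entirely for tasks that were never queued.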