workqueue: cond_resched() after processing each work item
If !PREEMPT, a kworker running work items back to back can hog CPU.
This becomes dangerous when a self-requeueing work item which is
waiting for something to happen races against stop_machine. Such a
self-requeueing work item would requeue itself indefinitely, hogging
the kworker and the CPU it's running on, while stop_machine would wait
for that CPU to enter stop_machine while preventing anything else from
happening on all other CPUs. The two would deadlock.

Jamie Liu reports that this deadlock scenario exists around
scsi_requeue_run_queue() and libata port multiplier support, where one
port may exclude command processing from other ports. With the right
timing, scsi_requeue_run_queue() can end up requeueing itself, trying
to execute an IO which is asked to be retried while another device has
exclusive access, which in turn can't make forward progress due to
stop_machine.

Fix it by invoking cond_resched() after executing each work item.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Jamie Liu <jamieliu@google.com>
References: http://thread.gmane.org/gmane.linux.kernel/1552567
Cc: stable@vger.kernel.org
---
 kernel/workqueue.c | 9 +++++++++
 1 file changed, 9 insertions(+)
commit b22ce2785d
parent c95389b4cd
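To make the failure mode described above concrete, here is a minimal,
hypothetical sketch of such a self-requeueing work item. The names
poll_work, poll_fn and event_happened are illustrative, not part of the
patch; the workqueue calls (INIT_WORK, queue_work, system_wq) are the
standard kernel API. While the condition stays false, this item
re-executes back to back on the same kworker:

	#include <linux/module.h>
	#include <linux/workqueue.h>

	static bool event_happened;		/* hypothetical condition */
	static struct work_struct poll_work;

	/*
	 * Requeues itself until the event arrives. On a !PREEMPT kernel,
	 * without a preemption point between executions, the hosting
	 * kworker hogs its CPU and a concurrent stop_machine() can wait
	 * on that CPU forever.
	 */
	static void poll_fn(struct work_struct *work)
	{
		if (!event_happened)
			queue_work(system_wq, work);	/* requeue ourselves */
	}

	static int __init selfreq_init(void)
	{
		INIT_WORK(&poll_work, poll_fn);
		queue_work(system_wq, &poll_work);
		return 0;
	}
	module_init(selfreq_init);

	MODULE_LICENSE("GPL");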
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2201,6 +2201,15 @@ __acquires(&pool->lock)
 		dump_stack();
 	}
 
+	/*
+	 * The following prevents a kworker from hogging CPU on !PREEMPT
+	 * kernels, where a requeueing work item waiting for something to
+	 * happen could deadlock with stop_machine as such work item could
+	 * indefinitely requeue itself while all other CPUs are trapped in
+	 * stop_machine.
+	 */
+	cond_resched();
+
 	spin_lock_irq(&pool->lock);
 
 	/* clear cpu intensive status */
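A note on placement: cond_resched() may call schedule(), which is not
permitted while holding a spinlock with interrupts disabled. The hunk
accordingly places it in the window after the work item's callback has
returned and the pool lock has been dropped, just before the
spin_lock_irq(&pool->lock) on the following context line retakes it.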