nvme: use blk_mq_start_hw_queues() in nvme_kill_queues()
Inside nvme_kill_queues(), we have to start the hw queues in order to drain requests sitting in the sw queues, the .dispatch list and the requeue list, so use blk_mq_start_hw_queues() instead of blk_mq_start_stopped_hw_queues(), which only runs queues that are currently stopped. The queues may have been started already, for example when nvme_start_queues() is called from the reset work function.

blk_mq_start_hw_queues() runs the hw queues in the current context instead of asynchronously as before. Since nvme_kill_queues() is run from either the remove context or the reset worker context, both are fine for running the hw queues directly. Holding namespaces_mutex is not a problem either, because nvme_start_freeze() already runs the hw queues this way.

Cc: stable@vger.kernel.org
Reported-by: Zhang Yi <yizhan@redhat.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
parent 0544f5494a
commit 806f026f9b
@@ -2437,7 +2437,13 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
 		revalidate_disk(ns->disk);
 		blk_set_queue_dying(ns->queue);
 		blk_mq_abort_requeue_list(ns->queue);
-		blk_mq_start_stopped_hw_queues(ns->queue, true);
+
+		/*
+		 * Forcibly start all queues to avoid having stuck requests.
+		 * Note that we must ensure the queues are not stopped
+		 * when the final removal happens.
+		 */
+		blk_mq_start_hw_queues(ns->queue);
 	}
 	mutex_unlock(&ctrl->namespaces_mutex);
 }
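For context, here is a minimal sketch of nvme_kill_queues() after this patch. The function signature is taken from the hunk header and the statements inside the loop come from the hunk itself; the locking, the list_for_each_entry() loop and the skip check for namespaces without a disk are assumptions inferred from the visible mutex_unlock() and ns-> references, so they may not match the kernel source exactly.

/* Sketch only: lines outside the hunk above are reconstructed assumptions. */
void nvme_kill_queues(struct nvme_ctrl *ctrl)
{
	struct nvme_ns *ns;

	mutex_lock(&ctrl->namespaces_mutex);
	list_for_each_entry(ns, &ctrl->namespaces, list) {
		/* Assumption: skip namespaces that never got a disk. */
		if (!ns->disk)
			continue;

		/* Revalidating a dead namespace sets its capacity to 0. */
		revalidate_disk(ns->disk);
		blk_set_queue_dying(ns->queue);
		blk_mq_abort_requeue_list(ns->queue);

		/*
		 * Forcibly start all queues to avoid having stuck requests.
		 * Note that we must ensure the queues are not stopped
		 * when the final removal happens.
		 */
		blk_mq_start_hw_queues(ns->queue);
	}
	mutex_unlock(&ctrl->namespaces_mutex);
}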