[ Upstream commit 49b0b6ffe20c5344f4173f3436298782a08da4f2 ]
There's a potential deadlock case when removing the vsock device or
processing the RESET event:
  vsock_for_each_connected_socket:
      spin_lock_bh(&vsock_table_lock) ----------- (1)
      ...
          virtio_vsock_reset_sock:
              lock_sock(sk) --------------------- (2)
      ...
      spin_unlock_bh(&vsock_table_lock)
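Roughly, the pattern looks like this (a simplified sketch, not the exact
net/vmw_vsock sources; the connected-table walk is condensed and the
function bodies are reconstructed only for illustration):

  #include <net/sock.h>
  #include <net/tcp_states.h>

  static DEFINE_SPINLOCK(vsock_table_lock);

  static void virtio_vsock_reset_sock(struct sock *sk)
  {
          lock_sock(sk);           /* (2) may sleep if 'sk' is owned elsewhere */
          sk->sk_state = TCP_CLOSE;
          sk->sk_err = ECONNRESET;
          sk->sk_error_report(sk);
          release_sock(sk);
  }

  static void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
  {
          spin_lock_bh(&vsock_table_lock);  /* (1) BH-disabled, atomic context */
          /* ... walk the connected-socket table and call fn(sk), i.e.
           * virtio_vsock_reset_sock(), for every connected socket ...
           */
          spin_unlock_bh(&vsock_table_lock);
  }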
lock_sock() may actively schedule when the 'sk' is owned by another
thread at the same time; in that case we would receive a warning message
that "scheduling while atomic".
Even worse, if the next task (selected by the scheduler) tries to
release a 'sk', it needs to acquire vsock_table_lock and the deadlock
occurs, driving the system into a softlockup state.
  Call trace:
   queued_spin_lock_slowpath
   vsock_remove_bound
   vsock_remove_sock
   virtio_transport_release
   __vsock_release
   vsock_release
   __sock_release
   sock_close
   __fput
   ____fput
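The queued_spin_lock_slowpath at the top is that contention: the release
path needs the same lock, roughly (a simplified sketch, not the exact
source):

  void vsock_remove_bound(struct vsock_sock *vsk)
  {
          spin_lock_bh(&vsock_table_lock);  /* spins: the holder went to sleep in
                                             * lock_sock() waiting for this very 'sk' */
          /* ... unlink the socket from the bound table ... */
          spin_unlock_bh(&vsock_table_lock);
  }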
So we should not take the sk_lock in this case, just like the behavior
in vhost_vsock or vmci.
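A minimal sketch of that change (not the verbatim patch; only the
lock_sock()/release_sock() pair is dropped, the state and error updates
stay as they are):

  static void virtio_vsock_reset_sock(struct sock *sk)
  {
          /* As argued above, the sk_lock is not required here, matching
           * vhost_vsock and vmci; taking it can deadlock against the
           * vsock_table_lock held by the caller.
           */
          sk->sk_state = TCP_CLOSE;
          sk->sk_err = ECONNRESET;
          sk->sk_error_report(sk);
  }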
Fixes: