eeafd6489d
[ Upstream commit 2c8c89b95831f46a2fb31a8d0fef4601694023ce ]

The paravirt queued spinlock slow path adds itself to the queue, then
calls pv_wait to wait for the lock to become free. This is implemented
by calling H_CONFER to donate cycles.

When hcall tracing is enabled, this H_CONFER call can lead to a spin
lock being taken in the tracing code, which results in the lock being
taken again; that nested acquisition also goes to the slow path because
it queues behind itself and so won't ever make progress.

An example trace of a deadlock:

  __pv_queued_spin_lock_slowpath
  trace_clock_global
  ring_buffer_lock_reserve
  trace_event_buffer_lock_reserve
  trace_event_buffer_reserve
  trace_event_raw_event_hcall_exit
  __trace_hcall_exit
  plpar_hcall_norets_trace
  __pv_queued_spin_lock_slowpath
  trace_clock_global
  ring_buffer_lock_reserve
  trace_event_buffer_lock_reserve
  trace_event_buffer_reserve
  trace_event_raw_event_rcu_dyntick
  rcu_irq_exit
  irq_exit
  __do_irq
  call_do_irq
  do_IRQ
  hardware_interrupt_common_virt

Fix this by introducing plpar_hcall_norets_notrace(), and using that to
make the SPLPAR virtual processor dispatching hcalls from the paravirt
spinlock code.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210508101455.1578318-2-npiggin@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
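For context, a minimal sketch of the shape of the fix, assuming the paravirt spinlock wait path simply routes its H_CONFER hypercall through the new untraced entry point. The wrapper name yield_to_holder_notrace() and its arguments are illustrative placeholders, not the literal upstream diff:

```c
/*
 * Illustrative sketch only: confer this CPU's cycles to the lock holder
 * via the untraced hcall entry point, so the paravirt spinlock slow path
 * never re-enters the hcall tracing code.
 */
#include <linux/types.h>
#include <asm/hvcall.h>
#include <asm/smp.h>

static inline void yield_to_holder_notrace(int holder_cpu, u32 yield_count)
{
	/*
	 * plpar_hcall_norets() would fire the hcall_entry/hcall_exit
	 * tracepoints; with hcall tracing enabled those can take a spinlock
	 * in the ring-buffer code, which deadlocks when this CPU is already
	 * queued on that same lock. The _notrace variant added by this
	 * patch makes the bare hypercall with no tracepoints.
	 */
	plpar_hcall_norets_notrace(H_CONFER,
				   get_hard_smp_processor_id(holder_cpu),
				   yield_count);
}
```

The key design point is that the _notrace variant performs only the raw hypercall, so pv_wait() can never recurse into ring-buffer locking while it is already queued on a lock.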
Directory listing:

  4xx
  8xx
  40x
  44x
  52xx
  82xx
  83xx
  85xx
  86xx
  512x
  amigaone
  cell
  chrp
  embedded6xx
  maple
  pasemi
  powermac
  powernv
  ps3
  pseries
  fsl_uli1575.c
  Kconfig
  Kconfig.cputype
  Makefile