KVM: MMU: avoid fast page fault fixing mmio page fault
Currently, fast page fault incorrectly tries to fix an mmio page fault when the generation number is invalid (spte.gen != kvm.gen). It then returns to the guest to retry the fault, since it sees the last spte as nonpresent. This causes an infinite loop.

Since fast page fault only works for direct mmu, the issue exists when:

1) tdp is enabled. It is triggered only on AMD hosts, because on Intel hosts the mmio page fault is recognized as an ept-misconfig, whose handler calls the fault-page path with error_code = 0.

2) guest paging is disabled. In this case the issue is hard to hit, since the paging-disabled window is short-lived and the sptes only become invalid after the memslots have changed 150 times.

Fix it by filtering out MMIO page faults in page_fault_can_be_fast.

Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Tested-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
@@ -2810,6 +2810,13 @@ static bool handle_abnormal_pfn(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
 
 static bool page_fault_can_be_fast(struct kvm_vcpu *vcpu, u32 error_code)
 {
+	/*
+	 * Do not fix the mmio spte with invalid generation number which
+	 * need to be updated by slow page fault path.
+	 */
+	if (unlikely(error_code & PFERR_RSVD_MASK))
+		return false;
+
 	/*
 	 * #PF can be fast only if the shadow page table is present and it
 	 * is caused by write-protect, that means we just need change the
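For context, the sketch below shows roughly how page_fault_can_be_fast() reads with the hunk applied. Only the first comment and the PFERR_RSVD_MASK check come from the hunk above; the rest of the function (the present/write-protect test) and the extra comment on why MMIO faults carry PFERR_RSVD_MASK are reconstructed from the truncated context and are an assumption about the surrounding code, not part of this commit.

static bool page_fault_can_be_fast(struct kvm_vcpu *vcpu, u32 error_code)
{
	/*
	 * Do not fix the mmio spte with invalid generation number which
	 * need to be updated by slow page fault path.
	 *
	 * (MMIO sptes are installed with reserved bits set, so an access
	 * to one faults with PFERR_RSVD_MASK in the error code; bail out
	 * here and let the slow path rebuild the spte.)
	 */
	if (unlikely(error_code & PFERR_RSVD_MASK))
		return false;

	/*
	 * #PF can be fast only if the shadow page table is present and it
	 * is caused by write-protect, that means we just need change the
	 * W bit of the spte which can be done out of mmu-lock.
	 */
	if (!(error_code & PFERR_PRESENT_MASK) ||
	    !(error_code & PFERR_WRITE_MASK))
		return false;

	return true;
}

With this check in place, an MMIO fault whose spte generation is stale never enters the lockless fast path; it falls through to the slow page fault path, which regenerates the mmio spte, breaking the retry loop described in the commit message.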