mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled
commit 4b42fb213678d2b6a9eeea92a9be200f23e49583 upstream.

Previously, we noticed that one rpma example failed [1] since commit 36f30e486d ("IB/core: Improve ODP to use hmm_range_fault()"), which uses the ODP feature to do RDMA WRITE between fsdax files.

After digging into the code, we found hmm_vma_handle_pte() will still return EFAULT even though all of its requested flags have been fulfilled. That's because a DAX page is marked as (_PAGE_SPECIAL | _PAGE_DEVMAP) by pte_mkdevmap().

Link: https://github.com/pmem/rpma/issues/1142 [1]
Link: https://lkml.kernel.org/r/20210830094232.203029-1-lizhijian@cn.fujitsu.com
Fixes: 4055062749 ("mm/hmm: add missing call to hmm_pte_need_fault in HMM_PFN_SPECIAL handling")
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This commit is contained in:
parent e1fa3b2b60
commit ce75a6b399

 mm/hmm.c | 5
@@ -291,10 +291,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		goto fault;
 
 	/*
+	 * Bypass devmap pte such as DAX page when all pfn requested
+	 * flags(pfn_req_flags) are fulfilled.
 	 * Since each architecture defines a struct page for the zero page, just
 	 * fall through and treat it like a normal page.
 	 */
-	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
+	if (pte_special(pte) && !pte_devmap(pte) &&
+	    !is_zero_pfn(pte_pfn(pte))) {
 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
 			pte_unmap(ptep);
 			return -EFAULT;