When guest_memfd is used for both shared/private memory, converting
pages to shared may require kvm_arch_gmem_invalidate() to be issued to
return the pages to an architecturally-defined "shared" state if the
pages were previously allocated and transitioned to a private state via
kvm_arch_gmem_prepare().

Handle this by issuing the appropriate kvm_arch_gmem_invalidate() calls
when converting ranges in the filemap to a shared state.

Signed-off-by: Michael Roth <michael.roth@xxxxxxx>
---
 virt/kvm/guest_memfd.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b77cdccd340e..f27e1f3962bb 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -203,6 +203,28 @@ static int kvm_gmem_shareability_apply(struct inode *inode,
 	struct maple_tree *mt;

 	mt = &kvm_gmem_private(inode)->shareability;
+
+	/*
+	 * If a folio has been allocated then it was possibly in a private
+	 * state prior to conversion. Ensure arch invalidations are issued
+	 * to return the folio to a normal/shared state as defined by the
+	 * architecture before tracking it as shared in gmem.
+	 */
+	if (m == SHAREABILITY_ALL) {
+		pgoff_t idx;
+
+		for (idx = work->start; idx < work->start + work->nr_pages; idx++) {
+			struct folio *folio = filemap_lock_folio(inode->i_mapping, idx);
+
+			if (!IS_ERR(folio)) {
+				kvm_arch_gmem_invalidate(folio_pfn(folio),
+							 folio_pfn(folio) + folio_nr_pages(folio));
+				folio_unlock(folio);
+				folio_put(folio);
+			}
+		}
+	}
+
 	return kvm_gmem_shareability_store(mt, work->start, work->nr_pages, m);
 }
-- 
2.25.1