+ * shared (i.e., non-CoCo VMs).
+ */
static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
{
- return IS_ENABLED(CONFIG_KVM_GMEM) &&
- kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
+ struct kvm_memory_slot *slot;
+
+ if (!IS_ENABLED(CONFIG_KVM_GMEM))
+ return false;
+
+ slot = gfn_to_memslot(kvm, gfn);
+ if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) {
+ /*
+ * For now, memslots only support in-place shared memory if the
+ * host is allowed to mmap memory (i.e., non-CoCo VMs).
+ */
Not accurate: there is no in-place conversion support in this series,
because there is no such interface. So the reason is that all memory is
shared for these VM types?
True that there's no in-place conversion yet.
In this patch series, guest_memfd memslots support shared memory only
for specific VM types (on x86, that would be KVM_X86_DEFAULT_VM and
KVM_X86_SW_PROTECTED_VMs).
How about this wording:
Without conversion support, if the guest_memfd memslot supports shared
memory, all memory must be used as not private (implicitly shared).
LGTM
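For reference, with the wording agreed above, the check and its comment
might end up looking something like this (just a sketch of the follow-up
change, not the final patch):

```c
	if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) {
		/*
		 * Without conversion support, if the guest_memfd memslot
		 * supports shared memory, all memory must be used as not
		 * private (implicitly shared).
		 */
		return false;
	}
```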
+ return false;
+ }
+
+ return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
}
#else
static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 2f499021df66..fe0245335c96 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -388,6 +388,23 @@ static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
return 0;
}
+
+bool kvm_gmem_memslot_supports_shared(const struct kvm_memory_slot *slot)
+{
+ struct file *file;
+ bool ret;
+
+ file = kvm_gmem_get_file((struct kvm_memory_slot *)slot);
+ if (!file)
+ return false;
+
+ ret = kvm_gmem_supports_shared(file_inode(file));
+
+ fput(file);
+ return ret;
Would it make sense to cache that information in the memslot, to avoid
the get/put?
We could simply cache when creating the memslot I guess.
When I wrote it, I was assuming that to ensure correctness we should
check with guest_memfd: what if someone closed the gmem file in the
middle of the fault path?
But after the discussion at the last call: since the faulting process is
long and racy anyway, if this check passed and the file had been closed
by the time we go to guest_memfd, the fault would just fail, so I guess
caching is fine.
Yes, that would be my assumption. I mean, we also must make sure that if
the user does something stupid like that, we won't trigger other
undesired code paths (like the guest_memfd suddenly becoming !shared).
As an alternative ... could we simple get/put when managing the memslot?
What does a simple get/put mean here?
s/simple/simply/
So when we create the memslot, we'd perform the get, and when we destroy
the memslot, we'd do the put.
Just an idea.
--
Cheers,
David / dhildenb