On Fri, Mar 28, 2025, Fuad Tabba wrote:
> @@ -389,22 +381,211 @@ static void kvm_gmem_init_mount(void)
> }
>
> #ifdef CONFIG_KVM_GMEM_SHARED_MEM
> -static bool kvm_gmem_offset_is_shared(struct file *file, pgoff_t index)
> +/*
> + * An enum of the valid folio sharing states:
> + * Bit 0: set if not shared with the guest (guest cannot fault it in)
> + * Bit 1: set if not shared with the host (host cannot fault it in)
> + */
> +enum folio_shareability {
> +	KVM_GMEM_ALL_SHARED	= 0b00,	/* Shared with the host and the guest. */
> +	KVM_GMEM_GUEST_SHARED	= 0b10,	/* Shared only with the guest. */
> +	KVM_GMEM_NONE_SHARED	= 0b11,	/* Not shared, transient state. */

Absolutely not.  The proper way to define bitmasks is to use BIT(xxx).  Based on
past discussions, I suspect you went this route so that the most common value is
'0' to avoid extra storage, but that should be an implementation detail buried
deep in the low level xarray handling, not a driver of how the enum is defined.

The name is also bizarre and confusing.  To map memory into the guest as
private, it needs to be in KVM_GMEM_GUEST_SHARED.  That's completely unworkable.

Of course, it's not at all obvious that you're actually trying to create a
bitmask.  The above looks like an inverted bitmask, but then it's used as if the
values don't matter:

	return (r == KVM_GMEM_ALL_SHARED || r == KVM_GMEM_GUEST_SHARED);

Given that I can't think of a sane use case for allowing guest_memfd to be
mapped into the host but not the guest (modulo temporary demand paging
scenarios), I think all we need is:

	KVM_GMEM_SHARED		= BIT(0),
	KVM_GMEM_INVALID	= BIT(1),

As for optimizing xarray storage, assuming it's actually a bitmask, simply let
KVM specify which bits to invert when storing/loading to/from the xarray so that
KVM can optimize storage for the most common value (which is presumably
KVM_GMEM_SHARED on arm64?).  If KVM_GMEM_SHARED is the desired "default", invert
bit 0, otherwise don't.
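E.g. something like this completely untested userspace sketch (all names here
are hypothetical, not from the patch) is roughly what I have in mind for hiding
the inversion in the low level storage helpers:

```c
/*
 * Userspace sketch of the "invert bits on store/load" idea.  BIT() is
 * open-coded here; in the kernel it comes from <linux/bits.h>.
 */
#include <assert.h>

#define BIT(n) (1UL << (n))

enum gmem_state {
	KVM_GMEM_SHARED  = BIT(0),
	KVM_GMEM_INVALID = BIT(1),
};

/* Bits to flip so that the most common value is stored as 0. */
#define GMEM_XA_INVERT_MASK	KVM_GMEM_SHARED

/*
 * An absent xarray entry reads back as value 0 via
 * xa_to_value(xa_load(...)), so the default never needs to be stored.
 */
static unsigned long gmem_state_to_xa(unsigned long state)
{
	return state ^ GMEM_XA_INVERT_MASK;
}

static unsigned long gmem_xa_to_state(unsigned long stored)
{
	return stored ^ GMEM_XA_INVERT_MASK;
}
```

With that, KVM_GMEM_SHARED round-trips through the xarray as 0, and everything
above the helpers sees the sane, non-inverted values.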
If for some reason we get to a state where the default value is multiple bits,
the inversion trick still works.  E.g. if KVM_GMEM_SHARED were a composite
value, then invert bits 0 and 1.  The polarity shenanigans should be easy to
hide in two low level macros/helpers.

> +/*
> + * Returns true if the folio is shared with the host and the guest.

This is a superfluous comment.  Simple predicates should be self-explanatory
based on function name alone.

> + *
> + * Must be called with the offsets_lock lock held.

Drop these types of comments and document through code, i.e. via lockdep
assertions (which you already have).

> + */
> +static bool kvm_gmem_offset_is_shared(struct inode *inode, pgoff_t index)
> +{
> +	struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
> +	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
> +	unsigned long r;
> +
> +	lockdep_assert_held(offsets_lock);
>
> -	/* For now, VMs that support shared memory share all their memory. */
> -	return kvm_arch_gmem_supports_shared_mem(gmem->kvm);
> +	r = xa_to_value(xa_load(shared_offsets, index));
> +
> +	return r == KVM_GMEM_ALL_SHARED;
> +}
> +
> +/*
> + * Returns true if the folio is shared with the guest (not transitioning).
> + *
> + * Must be called with the offsets_lock lock held.

See above.

> static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)

This should be something like kvm_gmem_fault_shared() to make it abundantly
clear what's being done.  Because it took me a few looks to realize this is
faulting memory into host userspace, not into the guest.
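And for completeness, the composite-default case from above falls out of the
exact same two helpers, just with a wider invert mask.  Again a completely
untested userspace sketch with hypothetical names:

```c
/*
 * Same inversion trick when the "default" is a composite value: invert
 * bits 0 and 1 so the composite default is stored as 0.  Hypothetical
 * userspace sketch, not kernel code; BIT() is open-coded.
 */
#include <assert.h>

#define BIT(n) (1UL << (n))

/* Pretend the shared state were a two-bit composite value. */
#define GMEM_COMPOSITE_SHARED	(BIT(0) | BIT(1))
#define GMEM_XA_INVERT_MASK	GMEM_COMPOSITE_SHARED

static unsigned long gmem_to_stored(unsigned long val)
{
	return val ^ GMEM_XA_INVERT_MASK;
}

static unsigned long gmem_from_stored(unsigned long stored)
{
	return stored ^ GMEM_XA_INVERT_MASK;
}
```

XOR with the mask is its own inverse, so the same pair of helpers handles
store and load regardless of how many bits the default occupies.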