On Mon, May 5, 2025 at 10:17 PM Vishal Annapurve <vannapurve@xxxxxxxxxx> wrote:
>
> On Mon, May 5, 2025 at 3:57 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> ...
> > > And not worry about lpage_info for the time being, until we actually do
> > > support larger pages.
> >
> > I don't want to completely punt on this, because if it gets messy, then I want
> > to know now and have a solution in hand, not find out N months from now.
> >
> > That said, I don't expect it to be difficult.  What we could punt on is
> > performance of the lookups, which is the real reason KVM maintains the rather
> > expensive disallow_lpage array.
> >
> > And that said, memslots can only bind to one guest_memfd instance, so I don't
> > immediately see any reason why the guest_memfd ioctl() couldn't process the
> > slots that are bound to it.  I.e. why not update KVM_LPAGE_MIXED_FLAG from the
> > guest_memfd ioctl() instead of from KVM_SET_MEMORY_ATTRIBUTES?
>
> I don't see the point of updating KVM_LPAGE_MIXED_FLAG in the scenarios where
> in-place memory conversion is supported with guest_memfd, as guest_memfd
> hugepage support is designed so that a hugepage can't have mixed attributes,
> i.e. the max_order returned by get_pfn will always cover a folio range with
> uniform attributes.
>
> Is your suggestion about using the guest_memfd ioctl() to also toggle memory
> attributes in the scenarios where the guest_memfd instance doesn't have the
> in-place memory conversion feature enabled?

Reading more into your response, I guess your suggestion is about covering the
different use cases present today, and new use cases that may land in the
future, which rely on kvm_lpage_info for faster lookups.  If so, then it should
be easy to modify the guest_memfd ioctl() to update kvm_lpage_info as you
suggested.