"Edgecombe, Rick P" <rick.p.edgecombe@xxxxxxxxx> writes: > On Wed, 2025-05-14 at 16:42 -0700, Ackerley Tng wrote: >> + >> +static pgoff_t kvm_gmem_compute_invalidate_bound(struct inode *inode, >> + pgoff_t bound, bool start) >> +{ >> + size_t nr_pages; >> + void *priv; >> + >> + if (!kvm_gmem_has_custom_allocator(inode)) > > General comment - It's a bit unfortunate how kvm_gmem_has_custom_allocator() is > checked all over the place across this series. There are only two allocators > after this, right? So one is implemented with callbacks presumably designed to > fit other allocators, and one has special case logic in guest_memfd.c. > > Did you consider designing struct guestmem_allocator_operations so that it could > encapsulate the special logic for both the existing and new > allocators? I did, yes. I believe it is definitely possible to make standard 4K pages become another allocator too. I would love to clean this up. Not sure if that will be a new series after this one, or part of this one though. > If it > didn't work well, could we expect that a next allocator would actually fit > struct guestmem_allocator_operations? > This was definitely designed to support allocators beyond guestmem_hugetlb, though I won't promise that it will be a perfect fit for future allocators. This is internal to the kernel and this interface can be updated for future allocators though. 
>> +		return bound;
>> +
>> +	priv = kvm_gmem_allocator_private(inode);
>> +	nr_pages = kvm_gmem_allocator_ops(inode)->nr_pages_in_folio(priv);
>> +
>> +	if (start)
>> +		return round_down(bound, nr_pages);
>> +	else
>> +		return round_up(bound, nr_pages);
>> +}
>> +
>> +static pgoff_t kvm_gmem_compute_invalidate_start(struct inode *inode,
>> +						 pgoff_t bound)
>> +{
>> +	return kvm_gmem_compute_invalidate_bound(inode, bound, true);
>> +}
>> +
>> +static pgoff_t kvm_gmem_compute_invalidate_end(struct inode *inode,
>> +					       pgoff_t bound)
>> +{
>> +	return kvm_gmem_compute_invalidate_bound(inode, bound, false);
>> +}
>> +