On Wed, 2025-05-14 at 16:42 -0700, Ackerley Tng wrote:
> +
> +static pgoff_t kvm_gmem_compute_invalidate_bound(struct inode *inode,
> +						 pgoff_t bound, bool start)
> +{
> +	size_t nr_pages;
> +	void *priv;
> +
> +	if (!kvm_gmem_has_custom_allocator(inode))

General comment - It's a bit unfortunate how kvm_gmem_has_custom_allocator()
is checked all over the place across this series. There are only two
allocators after this, right? So one is implemented with callbacks presumably
designed to fit other allocators, and one has special-case logic in
guest_memfd.c.

Did you consider designing struct guestmem_allocator_operations so that it
could encapsulate the special logic for both the existing and new allocators?
If it didn't work well, could we expect that a next allocator would actually
fit struct guestmem_allocator_operations?

> +		return bound;
> +
> +	priv = kvm_gmem_allocator_private(inode);
> +	nr_pages = kvm_gmem_allocator_ops(inode)->nr_pages_in_folio(priv);
> +
> +	if (start)
> +		return round_down(bound, nr_pages);
> +	else
> +		return round_up(bound, nr_pages);
> +}
> +
> +static pgoff_t kvm_gmem_compute_invalidate_start(struct inode *inode,
> +						 pgoff_t bound)
> +{
> +	return kvm_gmem_compute_invalidate_bound(inode, bound, true);
> +}
> +
> +static pgoff_t kvm_gmem_compute_invalidate_end(struct inode *inode,
> +					       pgoff_t bound)
> +{
> +	return kvm_gmem_compute_invalidate_bound(inode, bound, false);
> +}
> +