On Wed, May 14, 2025 at 4:43 PM Ackerley Tng <ackerleytng@xxxxxxxxxx> wrote:
> ...
> +/**
> + * kvm_gmem_zero_range() - Zeroes all sub-pages in range [@start, @end).
> + *
> + * @mapping: the filemap to remove this range from.
> + * @start: index in filemap for start of range (inclusive).
> + * @end: index in filemap for end of range (exclusive).
> + *
> + * The pages in range may be split. truncate_inode_pages_range() isn't the right
> + * function because it removes pages from the page cache; this function only
> + * zeroes the pages.
> + */
> +static void kvm_gmem_zero_range(struct address_space *mapping,
> +				pgoff_t start, pgoff_t end)
> +{
> +	struct folio_batch fbatch;
> +
> +	folio_batch_init(&fbatch);
> +	while (filemap_get_folios(mapping, &start, end - 1, &fbatch)) {
> +		unsigned int i;
> +
> +		for (i = 0; i < folio_batch_count(&fbatch); ++i) {
> +			struct folio *f;
> +			size_t nr_bytes;
> +
> +			f = fbatch.folios[i];
> +			nr_bytes = offset_in_folio(f, end << PAGE_SHIFT);
> +			if (nr_bytes == 0)
> +				nr_bytes = folio_size(f);
> +
> +			folio_zero_segment(f, 0, nr_bytes);

folio_zero_segment() takes a start byte offset and an exclusive end byte
offset within the folio. This invocation needs to operate only on the part
of the folio that overlaps [start, end), but instead it always starts at
byte 0 and ends at an offset derived solely from `end`, which may be
unaligned within the folio. Depending on the request and the folio size,
this zeroes more than requested, less than requested, or both.

> +		}
> +
> +		folio_batch_release(&fbatch);
> +		cond_resched();
> +	}
> +}
> +