On Mon, Apr 07, 2025 at 05:30:30PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
>
> Prior to commit e614a00117bc2d, xmbuf_map_backing_mem relied on
> folio_file_page to return the base page for the xmbuf's loff_t in the
> xfile, and set b_addr to the page_address of that base page.
>
> Now that folio_file_page has been removed from xmbuf_map_backing_mem,
> we always set b_addr to the folio_address of the folio.  This is
> correct for the situation where the folio size matches the buffer
> size, but it's totally wrong if tmpfs uses large folios.  We need to
> use offset_in_folio here.
>
> Found via xfs/801, which demonstrated evidence of corruption of an
> in-memory rmap btree block right after initializing an adjacent block.

Hmm, I thought we'd never get large folios for our non-standard tmpfs
use.  I guess I was wrong on that..

The fix looks good:

Reviewed-by: Christoph Hellwig <hch@xxxxxx>

But a little note below:

> +	bp->b_addr = folio_address(folio) + offset_in_folio(folio, pos);

Given that this is or at least will become a common pattern, do we want
a mm layer helper for it?
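Something along these lines is what I have in mind; this is just a
sketch, and folio_pos_address() is a made-up name, not an existing mm
API:

	/*
	 * Hypothetical helper: return the kernel virtual address for
	 * file position @pos within @folio.  Only valid when the folio
	 * has a kernel mapping, the same constraint as folio_address().
	 */
	static inline void *folio_pos_address(struct folio *folio, loff_t pos)
	{
		return folio_address(folio) + offset_in_folio(folio, pos);
	}

The call site above would then simply become

	bp->b_addr = folio_pos_address(folio, pos);

which keeps the offset arithmetic in one place instead of open-coding
it at every caller.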