On Tue, Apr 08, 2025 at 10:48:55AM -0700, Darrick J. Wong wrote:
> On Tue, Apr 08, 2025 at 10:24:40AM -0700, Luis Chamberlain wrote:
> > On Tue, Apr 8, 2025 at 10:06 AM Luis Chamberlain <mcgrof@xxxxxxxxxx> wrote:
> > > Fun puzzle for the community is figuring out *why* oh why did a
> > > large folio end up being used on buffer-heads for your use case
> > > *without* an LBS device (logical block size) being present, as I
> > > assume you didn't have one, ie say a nvme or virtio block device
> > > with logical block size > PAGE_SIZE. The area in question would
> > > trigger on folio migration *only* if you are migrating large
> > > buffer-head folios. We only create those
> >
> > To be clear, large folios for buffer-heads.
> >
> > > if you have an LBS device and are leveraging the block device
> > > cache or a filesystem with buffer-heads with LBS (they don't exist
> > > yet other than the block device cache).
>
> My guess is that udev or something tries to read the disk label in
> response to some uevent (mkfs, mount, unmount, etc), which creates a
> large folio because min_order > 0, and attaches a buffer head. There's
> a separate crash report that I'll cc you on.

OK so as willy pointed out I buy that for x86_64 *iff* we do already
have opportunistic large folio support for the buffer-head read/write
path. But also, I don't think we enable large folios yet on the block
device cache aops unless we have a min order block device... so what
gives?

  Luis
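
To make the min_order gating discussed above concrete, here is a tiny
userspace sketch, not kernel code: the helper name, constants and the
table of block sizes below are made up for illustration only. It just
shows how a minimum folio order falls out of the logical block size,
and why an order > 0 (a large folio) is only required once the logical
block size exceeds PAGE_SIZE, which is the LBS case being debated:

/*
 * Toy userspace model of the gating above: the block device page cache
 * only needs large folios when the logical block size exceeds the base
 * page size, i.e. when the minimum folio order for the mapping is > 0.
 * Names and values here are illustrative, not the kernel's.
 */
#include <stdio.h>

#define BASE_PAGE_SIZE 4096u	/* assume a 4k base page, as on x86_64 */

/* smallest order such that (BASE_PAGE_SIZE << order) >= logical block size */
static unsigned int min_folio_order(unsigned int logical_block_size)
{
	unsigned int order = 0;

	while ((BASE_PAGE_SIZE << order) < logical_block_size)
		order++;
	return order;
}

int main(void)
{
	unsigned int lbs[] = { 512, 4096, 8192, 16384, 65536 };

	for (unsigned int i = 0; i < sizeof(lbs) / sizeof(lbs[0]); i++) {
		unsigned int order = min_folio_order(lbs[i]);

		printf("logical block size %6u -> min folio order %u%s\n",
		       lbs[i], order,
		       order ? " (large folio required)" :
			       " (order-0 folio suffices)");
	}
	return 0;
}

On a 4k page system this prints order 0 for 512 and 4096 byte blocks,
and orders 1, 2 and 4 for 8k, 16k and 64k, i.e. only the LBS devices
force a minimum order > 0 on the mapping and hence large buffer-head
folios; without one, any large buffer-head folio would have to come
from some opportunistic large folio path instead.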