Large folios and filemap_get_folios_contig()

Hi,

Recently I hit a bug when developing the large folios support for btrfs.

We call filemap_get_folios_contig(), then lock each returned folio.
(We also have a case where we unlock each returned folio.)

However, since a large folio can be returned several times in the batch,
this causes a deadlock, as btrfs ends up trying to lock the same folio
more than once.

Then I looked into the caller of filemap_get_folios_contig() inside
mm/gup, and it indeed does the correct skip of duplicated folios.


This makes me wonder: now that we have large folios, why do callers of
filemap_get_folios_contig() still have to skip duplicated large folios
themselves?

Isn't the purpose of large folios to handle a much larger range in one
go, without iterating over the individual pages?


And there are only 3 call sites: two of them are in nilfs and ramfs,
neither of which supports large folios; the only caller with large folio
support is memfd_pin_folios(), which skips duplicated folios manually.

I'm wondering if it's possible to make filemap_get_folios_contig()
avoid filling the batch with duplicated folios entirely?

Thanks,
Qu
