Hi,

Recently I hit a bug while developing large folio support for btrfs: we call filemap_get_folios_contig() and then lock each returned folio. (We also have a case where we unlock each returned folio.) However, since a large folio can be returned several times in the batch, this leads to a deadlock, as btrfs ends up trying to lock the same folio more than once.

I then looked at the caller of filemap_get_folios_contig() inside mm/gup, and it indeed does the correct skip.

This makes me wonder: now that we have large folios, why do we still call filemap_get_folios_contig() and then skip the duplicated large folios in the caller? Isn't the point of large folios to handle a much larger range in one go, without going through the individual pages?

And there are only 3 call sites. Two of them are nilfs and ramfs, neither of which supports large folios; the only caller with large folio support is memfd_pin_folios(), which skips the duplicated folios manually. (A rough sketch of the kind of manual skip I mean is appended below.)

So I'm wondering: would it be possible to make filemap_get_folios_contig() avoid filling the batch with duplicated folios completely?

Thanks,
Qu
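
P.S. To illustrate the manual skip, here is a rough, untested sketch of what a caller currently has to do (the function name and the surrounding context are made up purely for illustration, this is not the actual btrfs or gup code): because a large folio can occupy several consecutive batch slots, the caller must compare against the previously handled folio before locking it again.

	static void process_contig_range(struct address_space *mapping,
					 pgoff_t start, pgoff_t end)
	{
		struct folio_batch fbatch;
		struct folio *prev = NULL;
		unsigned int found, i;

		folio_batch_init(&fbatch);
		found = filemap_get_folios_contig(mapping, &start, end, &fbatch);

		for (i = 0; i < found; i++) {
			struct folio *folio = fbatch.folios[i];

			/*
			 * A large folio can show up in several consecutive
			 * slots of the batch.  Locking it a second time here
			 * would deadlock, so skip the repeats.
			 */
			if (folio == prev)
				continue;
			prev = folio;

			folio_lock(folio);
			/* ... per-folio work goes here ... */
			folio_unlock(folio);
		}
		folio_batch_release(&fbatch);
	}

If filemap_get_folios_contig() stored each folio only once, the prev check above (and the equivalent skip in memfd_pin_folios()) would not be needed.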