On 05.08.25 15:15, Linus Torvalds wrote:
> On Tue, 5 Aug 2025 at 16:05, David Hildenbrand <david@xxxxxxxxxx> wrote:
>> So I don't like the idea of micro-optimizing num_pages_contiguous() by
>> adding weird tweaks to the core for that.
>
> Seriously - take a look at that suggested sequence I posted, and tell
> me that it isn't *MORE* obvious than the horror that is nth_page().
>
> Honestly, if anybody thinks nth_page() is obvious and good, I think
> they have some bad case of Stockholm syndrome.
>
> This isn't about micro-optimizing. This is about not writing complete
> garbage code that makes no sense.
>
> nth_page() is a disgusting thing that is designed to look up
> known-contiguous pages. That code mis-used it for *testing* for being
> contiguous. It may have _worked_, but it was the wrong thing to do.
>
> nth_page() in general should just not exist. I don't actually believe
> there is any valid reason for it. I do not believe we should actually
> have valid consecutive allocations of pages across sections.

Oh, just to add to that: 1 GiB folios (hugetlb, dax) are the main reason
why we use nth_page() in things like folio_page(), and also why
folio_page_idx() is so horrible.
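
To make the difference concrete: nth_page() is for indexing into pages the
caller already knows are PFN-contiguous, whereas *testing* for contiguity
should just compare PFNs. Rough sketch only; this is not the exact sequence
Linus posted, and pages_are_contiguous() is a made-up name for illustration:

#include <linux/mm.h>

/*
 * Indexing: the caller already knows that page .. page + n - 1 are
 * PFN-contiguous and only wants the n-th struct page.
 */
static inline struct page *lookup_nth(struct page *page, unsigned long n)
{
	return nth_page(page, n);
}

/*
 * Testing: deciding whether two struct pages actually are
 * PFN-contiguous. Compare PFNs; don't play nth_page()/pointer games.
 */
static inline bool pages_are_contiguous(struct page *a, struct page *b)
{
	return page_to_pfn(b) == page_to_pfn(a) + 1;
}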
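
To spell out the 1 GiB case: with classic SPARSEMEM (no VMEMMAP) the memmap
is only virtually contiguous within a section (128 MiB on x86-64), so the
struct pages of a 1 GiB folio can span sections and plain pointer arithmetic
on struct page is not safe. That is roughly why the helpers end up looking
like the following; this is a paraphrase, not a verbatim copy, see
include/linux/mm.h and include/linux/page-flags.h for the real definitions:

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
/* memmap is only contiguous within a section: go through the PFN. */
#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
#define folio_page_idx(folio, p)	(page_to_pfn(p) - folio_pfn(folio))
#else
/* memmap is virtually contiguous: plain pointer arithmetic is fine. */
#define nth_page(page, n)	((page) + (n))
#define folio_page_idx(folio, p)	((p) - &(folio)->page)
#endif

/* A 1 GiB folio can cross sections, hence nth_page() and not "+ n". */
#define folio_page(folio, n)	nth_page(&(folio)->page, n)
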
--
Cheers,
David / dhildenb