On Tue, May 27, 2025 at 08:29:56AM -0700, Jakub Kicinski wrote:
> On Thu, 22 May 2025 16:08:48 -0700 Saeed Mahameed wrote:
> > On 22 May 15:30, Jakub Kicinski wrote:
> > >On Fri, 23 May 2025 00:41:21 +0300 Tariq Toukan wrote:
> > >> Allocate a separate page pool for headers when SHAMPO is enabled.
> > >> This will be useful for adding support to zc page pool, which has to be
> > >> different from the headers page pool.
> > >
> > >Could you explain why always allocate a separate pool?
> >
> > Better flow management, 0 conditional code on the data path to alloc/return
> > header buffers. Since in mlx5 we already have separate paths to handle
> > headers, we don't have/need bnxt_separate_head_pool() and
> > rxr->need_head_pool spread across the code.
> >
> > Since we alloc and return pages in bulk, it makes more sense to manage
> > headers and data in separate pools if we are going to do it anyway for
> > "unreadable_pools", and when there's no performance impact.
>
> I think you need to look closer at the bnxt implementation.
> There is no conditional on the buffer alloc path. If the head and
> payload pools are identical we simply assign the same pointer to
> (using mlx5 naming) page_pool and hd_page_pool.
>
> Your arguments are not very convincing, TBH.
> The memory sitting in the recycling rings is very much not free.

I can add two more small arguments for always using two page pools:

- For large ring size + high MTU, the page_pool size will go above the
  internal limit of the page_pool in HW GRO mode.
- Debuggability (already mentioned by Saeed in the counters patch): if
  something goes wrong (page leaks, for example) we can easily pinpoint
  where the issue is.

Thanks,
Dragos