On Fri, 15 Aug 2025 18:02:10 +0200 Jesper Dangaard Brouer wrote:

> >> Yes, something like that, but I would like Kuba/Jakub's input, as
> >> IIRC he introduced the page_pool->cpuid and page_pool->napi.
> >>
> >> There are some corner cases we need to consider if they are valid.
> >> If cpumap gets redirected to the *same* CPU as the "previous" NAPI
> >> instance, which then makes page_pool->cpuid match, is it then still
> >> valid to do a "direct" return(?)
> >
> > I think/hope so, but it depends on xdp_return only being called from
> > softirq context. Since softirqs can't nest, if the producer and
> > consumer of the page_pool pages are on the same CPU, they can't race.
>
> That is true, softirqs can't nest.
>
> Jesse pointed me at the tun device driver, where we are in principle
> missing an xdp_set_return_frame_no_direct() section. Except I believe
> that the memory type cannot be page_pool in this driver. (Code hint:
> tun_xdp_act() calls xdp_do_redirect().)
>
> The tun driver made me realize that we do have users that don't run
> under a softirq, but they do remember to disable BH. (IIRC BH-disable
> can nest.) Are we also race safe in this case(?)

Yes, it should be. But the chances of direct recycling happening in
this case are rather low, since the NAPI needs to be pending to be
considered owned. If we're coming from process context, BHs are likely
not pending.

> Is the code change as simple as below, or did I miss something?
>
>  void __xdp_return
>  [...]
>  	case MEM_TYPE_PAGE_POOL:
>  [...]
> 		if (napi_direct && READ_ONCE(pool->cpuid) != smp_processor_id())
> 			napi_direct = false;

cpuid is a different beast; the NAPI-based direct recycling logic is in
page_pool_napi_local() (and we should not let it leak out to XDP; just
unref the page and PP will "override" the "napi_safe" argument).

> It is true that when we exit NAPI, pool->cpuid becomes -1.
> Or was that only during shutdown?
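For reference, page_pool_napi_local() looks roughly like the sketch
below (reconstructed from memory, not a verbatim copy of
net/core/page_pool.c). Note it is not pool->cpuid that becomes -1 on
NAPI exit: pool->cpuid is set once at pool creation (for the system
percpu pools). The per-CPU ownership comes from napi->list_owner, which
____napi_schedule() sets to the scheduling CPU and napi_complete_done()
resets to -1, which is why an idle (non-pending) NAPI is never
considered owned:

	/* Sketch of the locality check in net/core/page_pool.c,
	 * from memory -- details may differ from the actual source.
	 */
	static bool page_pool_napi_local(const struct page_pool *pool)
	{
		const struct napi_struct *napi;
		u32 cpuid;

		/* Direct recycling is only safe from softirq context */
		if (!in_softirq())
			return false;

		cpuid = smp_processor_id();

		/* System percpu pools: cpuid fixed at creation time */
		if (READ_ONCE(pool->cpuid) == cpuid)
			return true;

		/* Otherwise the pool is local only while its NAPI is
		 * pending on this very CPU; list_owner is -1 when the
		 * NAPI is not scheduled.
		 */
		napi = READ_ONCE(pool->p.napi);

		return napi && READ_ONCE(napi->list_owner) == cpuid;
	}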
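And for completeness, the kind of no-direct section the tun discussion
above is about would look roughly like the pattern veth already uses.
This is only a sketch: tun_run_xdp_prog() and the surrounding details
are invented for illustration, only xdp_set_return_frame_no_direct(),
xdp_clear_return_frame_no_direct(), bpf_prog_run_xdp() and
xdp_do_redirect() are the real APIs:

	/* Hypothetical helper showing where the no-direct section would
	 * go for a process-context XDP user that only disables BHs.
	 */
	static u32 tun_run_xdp_prog(struct net_device *dev,
				    struct bpf_prog *prog,
				    struct xdp_buff *xdp)
	{
		u32 act;

		local_bh_disable();		/* BH-off, not softirq */
		xdp_set_return_frame_no_direct();

		act = bpf_prog_run_xdp(prog, xdp);
		if (act == XDP_REDIRECT &&
		    xdp_do_redirect(dev, xdp, prog) < 0)
			act = XDP_ABORTED;
		/* xdp_do_flush() still needed at the end of the batch */

		xdp_clear_return_frame_no_direct();
		local_bh_enable();

		return act;
	}

With the flag set, any frame freed inside the section takes the normal
(non-direct) recycling path even if the ownership checks above would
otherwise have matched.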