RE: [Intel-wired-lan] [PATCH iwl-next v5 03/13] idpf: use a saner limit for default number of queues to allocate

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@xxxxxxxxxx> On Behalf Of
> Alexander Lobakin
> Sent: Tuesday, August 26, 2025 9:25 PM
> To: intel-wired-lan@xxxxxxxxxxxxxxxx
> Cc: Lobakin, Aleksander <aleksander.lobakin@xxxxxxxxx>; Kubiak, Michal
> <michal.kubiak@xxxxxxxxx>; Fijalkowski, Maciej
> <maciej.fijalkowski@xxxxxxxxx>; Nguyen, Anthony L
> <anthony.l.nguyen@xxxxxxxxx>; Kitszel, Przemyslaw
> <przemyslaw.kitszel@xxxxxxxxx>; Andrew Lunn <andrew+netdev@xxxxxxx>;
> David S. Miller <davem@xxxxxxxxxxxxx>; Eric Dumazet
> <edumazet@xxxxxxxxxx>; Jakub Kicinski <kuba@xxxxxxxxxx>; Paolo Abeni
> <pabeni@xxxxxxxxxx>; Alexei Starovoitov <ast@xxxxxxxxxx>; Daniel
> Borkmann <daniel@xxxxxxxxxxxxx>; Simon Horman <horms@xxxxxxxxxx>;
> NXNE CNSE OSDT ITP Upstreaming
> <nxne.cnse.osdt.itp.upstreaming@xxxxxxxxx>; bpf@xxxxxxxxxxxxxxx;
> netdev@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> Subject: [Intel-wired-lan] [PATCH iwl-next v5 03/13] idpf: use a saner limit for
> default number of queues to allocate
> 
> Currently, the maximum number of queues available for one vport is 16.
> This limit is hardcoded, and the function calculating the optimal number
> of queues then takes min(16, num_online_cpus()).
> In order to be able to allocate more queues, which will then be used for
> XDP, stop hardcoding 16 and rely on what the device gives us[*]. Instead
> of num_online_cpus(), which has been considered suboptimal since at least
> 2013, use netif_get_num_default_rss_queues() so that free queues remain
> in the pool.
> 
> [*] With the note:
> 
> Currently, idpf always allocates `IDPF_MAX_BUFQS_PER_RXQ_GRP` (== 2)
> buffer queues for each Rx queue and one completion queue for each Tx
> queue for best performance. But there was no check whether such a number
> is actually available; IOW, the assumption was not backed by any
> "harmonizing" / actual checks. Fix this while at it.
> 
> nr_cpu_ids Tx queues are needed only for lockless XDP sending; the
> regular stack doesn't benefit from them anyway.
> On a 128-thread Xeon, this now gives me 32 regular Tx queues and leaves
> 224 free for XDP (128 of which will handle XDP_TX, .ndo_xdp_xmit(), and XSk
> xmit when enabled).
> 
> Note 2:
> 
> Unfortunately, some CP/FW versions are not able to
> reconfigure/enable/disable a large number of queues within the minimum
> timeout (2 seconds). For now, fall back to the default timeout for every
> operation until this is resolved.
> 
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@xxxxxxxxx>
> ---
>  .../net/ethernet/intel/idpf/idpf_virtchnl.h   |  1 -
>  drivers/net/ethernet/intel/idpf/idpf_txrx.c   |  8 +--
>  .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 62 +++++++++++--------
>  3 files changed, 38 insertions(+), 33 deletions(-)
> 
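For anyone skimming the thread: the default-queue-count change described in
the quoted message boils down to roughly the following. This is only an
illustrative sketch, not the actual idpf code; the helper name and the
dev_max_txq parameter are made up for the example.

#include <linux/minmax.h>
#include <linux/netdevice.h>

/* Illustrative sketch only -- not the real idpf helper. */
static u16 example_default_txq_count(u16 dev_max_txq)
{
	/* Old behaviour: hardcoded cap of 16 queues. */
	/* return min_t(u16, 16, num_online_cpus()); */

	/*
	 * New behaviour as described above: cap at what the device
	 * actually advertises and derive the default from the CPU
	 * topology via netif_get_num_default_rss_queues(), which
	 * gives 32 on the 128-thread Xeon mentioned in the message,
	 * leaving the remaining queues free in the pool for XDP.
	 */
	return min_t(u16, dev_max_txq, netif_get_num_default_rss_queues());
}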
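Likewise, the per-CPU XDP Tx reservation mentioned above can be pictured
along these lines (again a sketch with made-up names, not the driver code).
With the 32 stack queues and 224 free device queues quoted above, it yields
the 128 XDP Tx queues used for XDP_TX, .ndo_xdp_xmit() and XSk xmit.

#include <linux/cpumask.h>
#include <linux/minmax.h>

/*
 * Illustrative sketch only: nr_cpu_ids extra Tx queues are wanted
 * purely for lockless XDP sending, one queue per CPU; the regular
 * stack queues are not counted here.
 */
static u16 example_xdp_txq_count(u16 dev_free_txq)
{
	return min_t(u16, dev_free_txq, nr_cpu_ids);
}
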
Tested-by: Ramu R <ramu.r@xxxxxxxxx>




