On Tue, Apr 08, 2025 at 05:31:19PM +0200, Jesper Dangaard Brouer wrote:
> In production, we're seeing TX drops on veth devices when the ptr_ring
> fills up. This can occur when NAPI mode is enabled, though it's
> relatively rare. However, with threaded NAPI - which we use in
> production - the drops become significantly more frequent.
>
> The underlying issue is that with threaded NAPI, the consumer often runs
> on a different CPU than the producer. This increases the likelihood of
> the ring filling up before the consumer gets scheduled, especially under
> load, leading to drops in veth_xmit() (ndo_start_xmit()).
>
> This patch introduces backpressure by returning NETDEV_TX_BUSY when the
> ring is full, signaling the qdisc layer to requeue the packet. The txq
> (netdev queue) is stopped in this condition and restarted once
> veth_poll() drains entries from the ring, ensuring coordination between
> NAPI and qdisc.
>
> Backpressure is only enabled when a qdisc is attached. Without a qdisc,
> the driver retains its original behavior - dropping packets immediately
> when the ring is full. This avoids unexpected behavior changes in setups
> without a configured qdisc.
>
> With a qdisc in place (e.g. fq, sfq) this allows Active Queue Management
> (AQM) to fairly schedule packets across flows and reduce collateral
> damage from elephant flows.
>
> A known limitation of this approach is that the full ring sits in front
> of the qdisc layer, effectively forming a FIFO buffer that introduces
> base latency. While AQM still improves fairness and mitigates flow
> dominance, the latency impact is measurable.
>
> In hardware drivers, this issue is typically addressed using BQL (Byte
> Queue Limits), which limits in-flight bytes based on the physical link
> rate. However, for virtual drivers like veth, there is no fixed
> bandwidth constraint - the bottleneck is CPU availability and the
> scheduler's ability to run the NAPI thread. It is unclear how effective
> BQL would be in this context.
>
> This patch serves as a first step toward addressing TX drops. Future
> work may explore adapting a BQL-like mechanism to better suit virtual
> devices like veth.
>
> Reported-by: Yan Zhai <yan@xxxxxxxxxxxxxx>
> Signed-off-by: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>

Thanks Jesper,

It's very nice to see backpressure support being added here.

...

> @@ -874,9 +909,16 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
>  			struct veth_xdp_tx_bq *bq,
>  			struct veth_stats *stats)
>  {
> +	struct veth_priv *priv = netdev_priv(rq->dev);
> +	int queue_idx = rq->xdp_rxq.queue_index;
> +	struct netdev_queue *peer_txq;
> +	struct net_device *peer_dev;
>  	int i, done = 0, n_xdpf = 0;
>  	void *xdpf[VETH_XDP_BATCH];
>
> +	peer_dev = priv->peer;

I think you need to take into account RCU here. Sparse says:

.../veth.c:919:18: warning: incorrect type in assignment (different address spaces)
.../veth.c:919:18:    expected struct net_device *peer_dev
.../veth.c:919:18:    got struct net_device [noderef] __rcu *peer

> +	peer_txq = netdev_get_tx_queue(peer_dev, queue_idx);
> +
>  	for (i = 0; i < budget; i++) {
>  		void *ptr = __ptr_ring_consume(&rq->xdp_ring);
> ...
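
FWIW, a minimal sketch of one way to satisfy the annotation (untested;
it assumes NAPI poll context counts as an RCU read-side critical
section here, and that the peer can go away during teardown, hence the
NULL check):

	/* priv->peer is __rcu annotated, so it needs an RCU accessor.
	 * NAPI poll runs inside an RCU read-side critical section, so
	 * rcu_dereference() should be sufficient. The peer device can
	 * be unregistered concurrently, so guard against NULL.
	 */
	peer_dev = rcu_dereference(priv->peer);
	peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx)
			    : NULL;

Any later use of peer_txq in this path would then also need to handle
the NULL case.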
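
As an aside, for anyone following along: the stop/wake flow-control
pattern the commit message describes generally takes this shape (an
illustrative sketch with made-up variable names, not the actual patch):

	/* Producer side, in ndo_start_xmit(): when the ring is full
	 * (and a qdisc is attached), stop the txq and return
	 * NETDEV_TX_BUSY so the qdisc layer requeues the skb instead
	 * of the driver dropping it.
	 */
	if (ring_full) {
		netif_tx_stop_queue(txq);
		return NETDEV_TX_BUSY;
	}

	/* Consumer side, in the NAPI poll routine: once entries have
	 * been drained from the ring, wake the stopped txq so the
	 * producer can resume transmitting.
	 */
	if (netif_tx_queue_stopped(peer_txq))
		netif_tx_wake_queue(peer_txq);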