When a Linux Wi-Fi 4/5 (HT/VHT) device talks to a Wi-Fi 6 AP and the AP
proposes a Block Acknowledgement agreement (ADDBA request) with a buffer
size exceeding what the device supports, the code in mac80211 just bails
out and rejects the aggregation. This yields a big performance penalty
on the ack path, observable in comparison with other OSes (Windows and
macOS), which "play smarter" and accept the proposal with a "clipped"
size.

A typical scenario would be:

    AP -> Device : ADDBA_request(size=256)

Current Linux reaction:

    Device -> AP : ADDBA_reply(failure)

Other OSes' reaction:

    Device -> AP : ADDBA_reply(size=64)

Note that the IEEE 802.11 standard allows both reactions, but bailing
out instead of clipping is clearly suboptimal. The patch below does the
latter.

Signed-off-by: Alexandre Ferrieux <alexandre.ferrieux@xxxxxxxxx>
---

diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c
index f3fbe5a4395e..264dad847842 100644
--- a/net/mac80211/agg-rx.c
+++ b/net/mac80211/agg-rx.c
@@ -317,18 +317,20 @@ void __ieee80211_start_rx_ba_session(struct sta_info *sta,
 		max_buf_size = IEEE80211_MAX_AMPDU_BUF_HT;
 
 	/* sanity check for incoming parameters:
-	 * check if configuration can support the BA policy
-	 * and if buffer size does not exceeds max value */
+	 * check if configuration can support the BA policy */
 	/* XXX: check own ht delayed BA capability?? */
 	if (((ba_policy != 1) &&
-	     (!(sta->sta.deflink.ht_cap.cap & IEEE80211_HT_CAP_DELAY_BA))) ||
-	    (buf_size > max_buf_size)) {
-		status = WLAN_STATUS_INVALID_QOS_PARAM;
+	     (!(sta->sta.deflink.ht_cap.cap & IEEE80211_HT_CAP_DELAY_BA)))) {
+		status = WLAN_STATUS_INVALID_QOS_PARAM;
 		ht_dbg_ratelimited(sta->sdata,
 				   "AddBA Req with bad params from %pM on tid %u. policy %d, buffer size %d\n",
 				   sta->sta.addr, tid, ba_policy, buf_size);
 		goto end;
 	}
+	/* clip the buffer size to our maximum instead of bailing out */
+	if (buf_size > max_buf_size)
+		buf_size = max_buf_size;
+
 	/* determine default buffer size */
 	if (buf_size == 0)
 		buf_size = max_buf_size;
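
For illustration, here is a small standalone sketch of the recipient-side
decision the patch implements. This is not mac80211 code: the helper name
addba_accept_size and the constant LOCAL_MAX_RX_BA_BUF are made up, and
the 64-frame limit is just the HT example from the scenario above.

	#include <stdio.h>

	#define LOCAL_MAX_RX_BA_BUF 64	/* hypothetical device limit (HT example) */

	/* Return the buffer size to advertise in the ADDBA response:
	 * accept the originator's proposal, clipped to what we support. */
	static unsigned int addba_accept_size(unsigned int requested,
					      unsigned int local_max)
	{
		return requested > local_max ? local_max : requested;
	}

	int main(void)
	{
		unsigned int req = 256;	/* Wi-Fi 6 AP proposal */

		printf("ADDBA_request(size=%u) -> ADDBA_reply(size=%u)\n",
		       req, addba_accept_size(req, LOCAL_MAX_RX_BA_BUF));
		return 0;
	}

The actual patch needs no new helper: the ADDBA response sent at the end
of __ieee80211_start_rx_ba_session() is built from buf_size, so clipping
the variable before that point should be enough to advertise the reduced
window back to the AP.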