On Tue, 15 Apr 2025 11:10:01 +0200 Justin Iurman wrote:
> > However, there is in my opinion an issue that can occur: between the
> > check on in_softirq() and the call to local_bh_disable(), the task
> > may be scheduled on another CPU. As a result, the check on
> > in_softirq() becomes ineffective because we may end up disabling BH
> > on a CPU that is not the one we just checked (with
> > if (in_softirq()) { ... }).

The context is not affected by migration. The context is fully defined
by the execution stack.

> Hmm, I think it's correct... good catch. I went for this solution to
> (i) avoid useless nested BH disable calls; and (ii) avoid ending up
> with a spaghetti graph of possible paths with or without BHs disabled
> (i.e., with single entry points, namely lwtunnel_xmit() and
> lwtunnel_output()), which otherwise makes the code hard to maintain
> IMO.
>
> So, if we want to follow what Alexei suggests (see his last response),
> we'd need to disable BHs in both ip_local_out() and ip6_local_out().
> These are the common functions which are closest in depth to both
> lwtunnel_xmit() and lwtunnel_output(). But... at the "cost" of
> disabling BHs even when it may not be required. Indeed, ip_local_out()
> and ip6_local_out() both call dst_output(), which is usually not
> lwtunnel_output() (and there may not even be a lwtunnel_xmit() to call
> either).
>
> The other solution is to always call local_bh_disable() in both
> lwtunnel_xmit() and lwtunnel_output(), at the cost of disabling BHs
> when they already are. Which was basically v1 and received a NACK
> from Alexei.

I thought he nacked preempt_disable()
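To make the trade-off concrete, here is a rough pseudocode sketch of the two options being weighed above (not actual kernel code; the function bodies, argument lists, and exact call sites are simplified assumptions based only on this discussion):

```
/* Option A (Alexei's suggestion, as described above): disable BHs
 * unconditionally at the common entry points, ip_local_out() and
 * ip6_local_out(). Pays the cost even when dst_output() does not
 * resolve to lwtunnel_output() and no lwtunnel_xmit() is involved.
 */
int ip_local_out(/* ... */)
{
	int err;

	local_bh_disable();          /* always taken */
	err = dst_output(/* ... */); /* usually NOT lwtunnel_output() */
	local_bh_enable();
	return err;
}

/* Option B (basically v1): disable BHs inside lwtunnel_xmit() and
 * lwtunnel_output() themselves. local_bh_disable() nests safely, so
 * this is correct, but it is redundant when BHs are already disabled
 * (e.g. when already running in softirq context).
 */
int lwtunnel_output(/* ... */)
{
	local_bh_disable();          /* possibly redundant */
	/* ... do the actual output work ... */
	local_bh_enable();
	return 0;
}
```

Note that the in_softirq() check discussed at the top is safe against migration either way: the softirq count lives in the task's preempt_count, which travels with the execution context rather than the CPU.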