On Wed, Sep 03, 2025 at 10:56:46PM +0200, Peter Zijlstra wrote:
> On Wed, Sep 03, 2025 at 10:41:21AM -1000, Tejun Heo wrote:
> > Hello,
> >
> > On Wed, Sep 03, 2025 at 10:08:22PM +0200, Peter Zijlstra wrote:
> > > > I'm a bit confused. This series doesn't have prep patches to add
> > > > @rf to dl_server_pick_f. Is this the right patch?
> > >
> > > Patch 14 seems to be the proposed alternative, and I'm not liking
> > > that at all.
> > >
> > > That rf passing was very much also needed for that other issue; I'm
> > > not sure why that's gone away.
> >
> > Using balance() was my suggestion to stay within the current
> > framework. If we want to add @rf to pick_task(), that's a more
> > fundamental change. We dropped the discussion in the other thread,
> > but I found it odd to add @rf to pick_task() while disallowing the
> > use of @rf in the non-dl-server pick path, and if we want to allow
> > that, we gotta solve the race between pick_task() dropping the rq
> > lock and ttwu inserting a high-priority task.
>
> I thought the idea was to add rf unconditionally, dl-server or not; it
> is needed in both cases.
>
> Yes, that race needs dealing with. We have an existing pattern that
> checks whether a higher class has runnable tasks and restarts the
> pick. This is currently only done for pick_next_task_fair(), but it
> can easily be extended.
>
> You suggested maybe moving this to the ttwu side -- but up to this
> point I thought we were in agreement. I'm not sure moving it to the
> ttwu side makes things better; it would need ttwu to know a pick is in
> progress and for which class. The existing restart of the pick is
> simpler, I think.
>
> Yes, the restart is somewhat more complicated if we want to deal with
> the dl-server, but not terribly so. It could just store a snapshot of
> rq->dl.dl_nr_running from before the pick and only restart if that
> went up.
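The snapshot-and-restart idea quoted above could look roughly like the sketch below. This is illustrative only, not a patch: the wrapper name and the exact place the snapshot lives are made up, and pick_task_scx() is stood in for whatever pick path ends up dropping rq->lock.

```c
/*
 * Hypothetical sketch of restart-on-dl-wakeup: snapshot
 * rq->dl.dl_nr_running before a pick that may drop rq->lock, and redo
 * the pick if a deadline task became runnable meanwhile.  Names are
 * illustrative, not actual kernel code.
 */
static struct task_struct *pick_task_with_restart(struct rq *rq, struct rq_flags *rf)
{
	struct task_struct *p;
	unsigned int dl_nr;

restart:
	dl_nr = rq->dl.dl_nr_running;	/* snapshot before the pick */

	p = pick_task_scx(rq, rf);	/* may drop and retake rq->lock */

	/*
	 * If a dl task showed up while rq->lock was dropped, this pick
	 * is stale; the higher class must be given a chance, so redo it.
	 */
	if (rq->dl.dl_nr_running > dl_nr)
		goto restart;

	return p;
}
```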
Stepping back one step; per here:

  https://lore.kernel.org/all/20250819100838.GH3245006@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/T/#mf8f95d1c2637a2ac9d9ec8f71fffe064a5718fff

the reason for dropping rq->lock is having to migrate a task from the
global dispatch queue.

Now, the current rules for migrating tasks are:

WAKEUP:

  Hold p->pi_lock; this serializes against ttwu(), and if the task is
  found blocked after taking the lock, you're sure it will stay
  blocked. You can then call set_task_cpu(), lock the target rq,
  enqueue the thing, and call it a day.

RUNNABLE:

  1) Hold both the source and target rq->lock.

  2) Hold the source rq->lock, set p->on_rq = TASK_ON_RQ_MIGRATING,
     dequeue, call set_task_cpu(), drop the source rq->lock, take the
     target rq->lock, enqueue, set p->on_rq = TASK_ON_RQ_QUEUED, and
     drop the target rq->lock.

set_task_cpu() has a pile of assertions trying to make sure these rules
are followed.

Of concern here is the RUNNABLE case -- if you want to strictly follow
those rules, you're going to have to drop rq->lock in order to acquire
the source rq->lock and all that.

However, the actual reason we need to acquire the source rq->lock is
that this lock protects the data structures the task is on. Without
taking the source rq->lock you're not protected against concurrent
use: the task could get scheduled in, or migrated elsewhere, at the
same time -- obviously bad things.

Now, assuming you have a locking order like:

  p->pi_lock
    rq->lock
      dsq->lock

when you do something like:

  __schedule()
    raw_spin_lock(rq->lock);
    next = pick_next_task()
      -> pick_task_scx()
           raw_spin_lock(dsq->lock);

then you are, in effect, in RUNNABLE case 1) above. You hold both
locks. Nothing is going to move your task around while you hold that
dsq->lock. The task is on the dsq, and anybody else wanting to do
anything with it will first have to take dsq->lock.

Therefore, at this point, it is perfectly fine to do:

  set_task_cpu(p, cpu_of(rq)); /* move the task here */

There is no actual concurrency.
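Put together, the argument above could be sketched in pick_task_scx() terms like this. Hedged heavily: the dsq variable, the first_task_on_dsq() helper, and the surrounding structure are all invented for illustration; only the locking shape is the point.

```c
/*
 * Sketch of the "RUNNABLE case 1) equivalent" argument; names are
 * illustrative, not the actual sched_ext implementation.  With
 * rq->lock and dsq->lock both held, dsq->lock plays the role of the
 * source rq->lock: it protects the structure the task sits on.
 */
static struct task_struct *pick_task_scx_sketch(struct rq *rq)
{
	struct scx_dispatch_q *dsq = &global_dsq;	/* hypothetical */
	struct task_struct *p;

	/* rq->lock is already held; taken in __schedule() */

	raw_spin_lock(&dsq->lock);
	p = first_task_on_dsq(dsq);	/* hypothetical helper */
	if (p && task_cpu(p) != cpu_of(rq)) {
		/*
		 * Both the lock protecting the task's current queue
		 * (dsq->lock) and the target rq->lock are held; nobody
		 * can concurrently run or migrate @p, so this is safe
		 * without dropping rq->lock first.
		 */
		set_task_cpu(p, cpu_of(rq));	/* move the task here */
	}
	raw_spin_unlock(&dsq->lock);

	return p;
}
```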
The only thing left is set_task_cpu() complaining that you're not
following the rules -- but you are, it just doesn't know that -- and we
can fix that.

Or am I still missing something?
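Teaching set_task_cpu() about this could be as small as one extra allowed state in its debug checks. Purely hypothetical sketch -- the helper name, the p->scx.dsq access, and the check placement are assumptions, not the real assertions in kernel/sched/core.c:

```c
/*
 * Hypothetical: let set_task_cpu()'s debug checks accept "task is on a
 * dsq whose lock we hold" as a valid migration state, the same way
 * TASK_ON_RQ_MIGRATING is accepted today.  Everything here is
 * illustrative.
 */
static inline bool scx_dsq_migration_locked(struct task_struct *p)
{
	struct scx_dispatch_q *dsq = p->scx.dsq;	/* assumed field */

	/* true iff @p sits on a dsq and that dsq's lock is held */
	return dsq && raw_spin_is_locked(&dsq->lock);
}
```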