On Wed 10-09-25 07:10:12, Tejun Heo wrote:
> Hello, Jan.
>
> On Wed, Sep 10, 2025 at 10:19:36AM +0200, Jan Kara wrote:
> > Well, reducing @max_active to 1 will certainly deal with the list_lock
> > contention as well. But I didn't want to do that as on a busy container
> > system I assume there can be switching happening between different pairs
> > of cgroups. With the approach in this patch switches with different
> > target cgroups can still run in parallel. I don't have any real world
> > data to back that assumption so if you think this parallelism isn't
> > really needed and we are fine with at most one switch happening in the
> > system, switching max_active to 1 is certainly simple enough.
>
> What bothers me is that the concurrency doesn't match between the work
> items being scheduled and the actual execution and we're resolving that
> by early exiting from some work items. It just feels like a roundabout
> way to do it with extra code. I think there are better ways to achieve
> per-bdi_writeback concurrency:
>
> - Move work_struct from isw to bdi_writeback and schedule the work item
>   on the target wb which processes isw's queued on the bdi_writeback.
>
> - Or have a per-wb workqueue with max_active limit so that concurrency is
>   regulated per-wb.
>
> The latter is a bit simpler but does cost more memory as workqueue_struct
> isn't tiny. The former is a bit more complicated but most likely less so
> than the current code. What do you think?

That's a fair objection and a good idea. I'll rework the patch to go with
the first variant.

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
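
For illustration, a minimal sketch of what the first variant could look
like, assuming bdi_writeback grows a lockless list of pending
inode_switch_wbs_context's drained by a single per-wb work item. The
names switch_wbs_ctxs, switch_wbs_work, isw->list_node and
process_one_isw() are placeholders for this sketch, not from an actual
patch:

	/* Hypothetical new members added to struct bdi_writeback: */
		struct llist_head	switch_wbs_ctxs;	/* pending isw's */
		struct work_struct	switch_wbs_work;	/* drains the list */

	static void wb_switch_wbs_work_fn(struct work_struct *work)
	{
		struct bdi_writeback *wb =
			container_of(work, struct bdi_writeback, switch_wbs_work);
		struct inode_switch_wbs_context *isw, *next;
		struct llist_node *list;

		/* Grab everything queued so far; later arrivals re-kick the work. */
		list = llist_del_all(&wb->switch_wbs_ctxs);
		llist_for_each_entry_safe(isw, next, list, list_node)
			process_one_isw(wb, isw);	/* hypothetical helper */
	}

	static void wb_queue_isw(struct bdi_writeback *wb,
				 struct inode_switch_wbs_context *isw)
	{
		/*
		 * llist_add() returns true only when the list was empty, so
		 * the work item is queued once per batch and at most one
		 * switch work runs per wb at a time.
		 */
		if (llist_add(&isw->list_node, &wb->switch_wbs_ctxs))
			queue_work(isw_wq, &wb->switch_wbs_work);
	}

Because llist_add() reports whether the list was previously empty, each wb
has at most one switch work item queued or running, which gives the
per-bdi_writeback concurrency described above while still letting switches
targeting different wbs proceed in parallel.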