On Thu 11-09-25 13:30:13, Jan Kara wrote:
> On Wed 10-09-25 07:10:12, Tejun Heo wrote:
> > Hello, Jan.
> >
> > On Wed, Sep 10, 2025 at 10:19:36AM +0200, Jan Kara wrote:
> > > Well, reducing @max_active to 1 will certainly deal with the list_lock
> > > contention as well. But I didn't want to do that because, on a busy
> > > container system, I assume switching can be happening between different
> > > pairs of cgroups. With the approach in this patch, switches with
> > > different target cgroups can still run in parallel. I don't have any
> > > real-world data to back that assumption, so if you think this
> > > parallelism isn't really needed and we are fine with at most one switch
> > > happening in the system, switching max_active to 1 is certainly simple
> > > enough.
> >
> > What bothers me is that the concurrency doesn't match between the work
> > items being scheduled and the actual execution, and we're resolving that
> > by exiting early from some work items. It just feels like a roundabout
> > way to do it with extra code. I think there are better ways to achieve
> > per-bdi_writeback concurrency:
> >
> > - Move the work_struct from the isw to the bdi_writeback and schedule the
> >   work item on the target wb, which processes the isw's queued on that
> >   bdi_writeback.
> >
> > - Or have a per-wb workqueue with a max_active limit so that concurrency
> >   is regulated per-wb.
> >
> > The latter is a bit simpler but does cost more memory, as
> > workqueue_struct isn't tiny. The former is a bit more complicated but
> > most likely less so than the current code. What do you think?
>
> That's a fair objection and a good idea. I'll rework the patch to go with
> the first variant.

I've realized why I didn't do something like this from the beginning. The
slight snag is that you can start switching an inode's wb only once an RCU
grace period expires (so that the I_WB_SWITCH setting is guaranteed to be
visible). This makes moving the work struct to bdi_writeback slightly
tricky. It shouldn't be too bad, so I think I'll try it and see how it
looks.

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
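
For illustration, here is a minimal sketch of the first variant (work item
moved to bdi_writeback, RCU delay kept per isw). The field and function
names below - switch_wbs_ctxs, switch_wbs_lock, switch_work, isw->list and
process_inode_switch_wbs() - are hypothetical additions invented for this
sketch, not existing kernel API, and this is not a tested patch:

/*
 * Hypothetical additions to the existing structures:
 *
 *	struct bdi_writeback {
 *		...
 *		struct list_head   switch_wbs_ctxs;  // pending isw's
 *		spinlock_t	   switch_wbs_lock;  // protects the list
 *		struct work_struct switch_work;	     // drains the list
 *	};
 *
 *	struct inode_switch_wbs_context {
 *		...
 *		struct list_head   list;	     // entry in switch_wbs_ctxs
 *	};
 *
 * Each isw still goes through call_rcu() so that switching only starts
 * after the I_WB_SWITCH setting is guaranteed to be visible; the RCU
 * callback merely queues the isw on the target wb and kicks the per-wb
 * work item.
 */
static void inode_switch_wbs_rcu_fn(struct rcu_head *rcu_head)
{
	struct inode_switch_wbs_context *isw =
		container_of(rcu_head, struct inode_switch_wbs_context, rcu_head);
	struct bdi_writeback *wb = isw->new_wb;

	/* Runs in softirq context; the other side of the lock disables BHs. */
	spin_lock(&wb->switch_wbs_lock);
	list_add_tail(&isw->list, &wb->switch_wbs_ctxs);
	spin_unlock(&wb->switch_wbs_lock);
	/* Re-queueing while the work is already pending is a no-op. */
	queue_work(isw_wq, &wb->switch_work);
}

static void wb_switch_work_fn(struct work_struct *work)
{
	struct bdi_writeback *wb =
		container_of(work, struct bdi_writeback, switch_work);
	struct inode_switch_wbs_context *isw, *next;
	LIST_HEAD(ctxs);

	/* Grab everything queued so far; later additions re-queue the work. */
	spin_lock_bh(&wb->switch_wbs_lock);
	list_splice_init(&wb->switch_wbs_ctxs, &ctxs);
	spin_unlock_bh(&wb->switch_wbs_lock);

	list_for_each_entry_safe(isw, next, &ctxs, list)
		process_inode_switch_wbs(isw);	/* switches inodes, frees isw */
}

With this shape, at most one switch work item runs per target wb (so
wb->list_lock isn't contended among switches to the same wb), while
switches targeting different wbs can still run in parallel.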