Re: [PATCH 1/4] writeback: Avoid contention on wb->list_lock when switching inodes

Hello, Jan.

On Wed, Sep 10, 2025 at 10:19:36AM +0200, Jan Kara wrote:
> Well, reducing @max_active to 1 will certainly deal with the list_lock
> contention as well. But I didn't want to do that as on a busy container
> system I assume there can be switching happening between different pairs of
> cgroups. With the approach in this patch switches with different target
> cgroups can still run in parallel. I don't have any real world data to back
> that assumption so if you think this parallelism isn't really needed and we
> are fine with at most one switch happening in the system, switching
> max_active to 1 is certainly simple enough.

What bothers me is that the concurrency of the work items being scheduled
doesn't match that of the actual execution, and we're resolving the mismatch
by exiting early from some work items. It just feels like a roundabout way to
do it with extra code. I think there are better ways to achieve
per-bdi_writeback concurrency:

- Move the work_struct from the isw to the bdi_writeback and schedule the
  work item on the target wb, which then processes the isws queued on that
  bdi_writeback (rough sketch below).

- Or have a per-wb workqueue with max_active limit so that concurrency is
  regulated per-wb.

The latter is a bit simpler but does cost more memory as workqueue_struct
isn't tiny. The former is a bit more complicated but most likely less so
than the current code. What do you think?

Thanks.

-- 
tejun
