Re: [PATCH 1/4] writeback: Avoid contention on wb->list_lock when switching inodes

Hello, Jan.

On Tue, Sep 09, 2025 at 04:44:02PM +0200, Jan Kara wrote:
> There can be multiple inode switch works trying to switch inodes to /
> from the same wb. This can happen in particular if some cgroup which
> owns many (thousands of) inodes exits and we need to switch them all.
> In this case several inode_switch_wbs_work_fn() instances will just be
> spinning on the same wb->list_lock while only one of them makes forward
> progress. This wastes CPU cycles and quickly leads to softlockup reports
> and an unusable system.
> 
> Instead of running several inode_switch_wbs_work_fn() instances in
> parallel switching to the same wb and contending on wb->list_lock, run
> just one instance and let the other isw items switching to this wb queue
> behind the one being processed.
> 
> Signed-off-by: Jan Kara <jack@xxxxxxx>
...
> +static void inode_switch_wbs_work_fn(struct work_struct *work)
> +{
> +	struct inode_switch_wbs_context *isw =
> +		container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
> +	struct bdi_writeback *new_wb = isw->new_wb;
> +	bool switch_running;
> +
> +	spin_lock_irq(&new_wb->work_lock);
> +	switch_running = !list_empty(&new_wb->switch_wbs_ctxs);
> +	list_add_tail(&isw->list, &new_wb->switch_wbs_ctxs);
> +	spin_unlock_irq(&new_wb->work_lock);
> +
> +	/*
> +	 * Let's leave the real work for the running worker since we'd just
> +	 * contend with it on wb->list_lock anyway.
> +	 */
> +	if (switch_running)
> +		return;
> +
> +	/* OK, we will be doing the switching work */
> +	wb_get(new_wb);
> +	spin_lock_irq(&new_wb->work_lock);
> +	while (!list_empty(&new_wb->switch_wbs_ctxs)) {
> +		isw = list_first_entry(&new_wb->switch_wbs_ctxs,
> +				       struct inode_switch_wbs_context, list);
> +		spin_unlock_irq(&new_wb->work_lock);
> +		process_inode_switch_wbs_work(isw);
> +		spin_lock_irq(&new_wb->work_lock);
> +		list_del(&isw->list);
> +		kfree(isw);
> +	}
> +	spin_unlock_irq(&new_wb->work_lock);
> +	wb_put(new_wb);
> +}

Would it be easier to achieve the same effect if we just reduced @max_active
when creating inode_switch_wbs? If we update cgroup_writeback_init() to use
the following instead:

        isw_wq = alloc_workqueue("inode_switch_wbs", WQ_UNBOUND, 1);

Wouldn't that serialize the switch works just as well? Note that the addition
of WQ_UNBOUND isn't strictly necessary, but we're in the process of defaulting
to unbound workqueues, so we might as well update it together. I can't think
of any reason why this work would require per-cpu behavior.
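
For context, here is a minimal sketch of what cgroup_writeback_init() would
look like with that change. The surrounding function body is from memory and
may not match the exact tree this patch is against; only the alloc_workqueue()
flags and max_active arguments are the point:

	static int __init cgroup_writeback_init(void)
	{
		/*
		 * With @max_active of 1, at most one inode switch work item
		 * runs at a time, so concurrent isw items queue behind it in
		 * the workqueue instead of contending on wb->list_lock.
		 */
		isw_wq = alloc_workqueue("inode_switch_wbs", WQ_UNBOUND, 1);
		if (!isw_wq)
			return -ENOMEM;
		return 0;
	}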

Thanks.

-- 
tejun



