Re: [PATCH v2] dm-verity: support block number limits for different ioprio classes

On Thu, 27 Mar 2025, Eric Biggers wrote:

> On Thu, Mar 27, 2025 at 06:12:22PM +0100, Mikulas Patocka wrote:
> > 
> > 
> > On Thu, 27 Mar 2025, Eric Biggers wrote:
> > 
> > > > Hi, Eric
> > > > 
> > > > It seems that verity_prefetch_io doesn't work efficiently.
> > > > dm_bufio_prefetch_with_ioprio
> > > >   __dm_bufio_prefetch
> > > >     blk_start_plug
> > > >     for loop
> > > >       __bufio_new
> > > >       submit_io
> > > >       cond_resched
> > > >     blk_finish_plug
> > > > 
> > > > If more than one hash block needs to be prefetched, cond_resched will
> > > > be called in each loop iteration and blk_finish_plug will only be
> > > > called at the end.
> > > > 
> > > > The requests for the hash blocks may not have been dispatched yet by
> > > > the time the requests for the data blocks have already completed.
> > > 
> > > Well, it's always possible for the prefetch I/O to be too slow, but the way it
> > > works does seem to be unnecessarily bad.  The prefetch work runs on a kworker,
> > > which is going to slow things down.  The way it should work is that verity_map()
> > > runs the prefetch work and starts, but doesn't wait for, any I/O that is needed.
> > > (And of course, most of the time no I/O will actually be needed.)
> > > 
> > > - Eric
> > 
> > dm-verity used to prefetch from the verity_map function, but it caused a 
> > deadlock - 3b6b7813b198b578aa7e04e4047ddb8225c37b7f - so, it was moved to 
> > a workqueue.
> 
> Interesting.  Unfortunately I think the workqueue makes it way worse.
> 
> In principle there is no need for it to block for anything.  If it would
> have to block, it should just continue on.
> 
> - Eric

That seems to be a valid concern. Should I disable prefetch and add a 
switch to enable it on demand? Or delete the prefetch code entirely?

There is also a bug where prefetch may race with suspend, sending I/Os 
even while the dm-verity device is suspended - but that is fixable by 
adding flush_workqueue to the postsuspend hook.
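
A minimal sketch of that fix (illustrative only, not the actual patch; it 
assumes the prefetch work items are queued on v->verify_wq, and the hook 
name follows the usual dm-target convention):

    /* Sketch: drain in-flight prefetch work before the device is
     * considered suspended, so no new I/O is submitted afterwards. */
    static void verity_postsuspend(struct dm_target *ti)
    {
            struct dm_verity *v = ti->private;

            flush_workqueue(v->verify_wq);
    }

    /* wired up in the target_type table: */
    static struct target_type verity_target = {
            .name        = "verity",
            /* ... */
            .postsuspend = verity_postsuspend,
    };

Since dm quiesces new bios before calling postsuspend, flushing the 
workqueue there should guarantee no prefetch I/O outlives the suspend.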

Mikulas
