Re: 10x I/O await times in 6.12

Hi,

On 2025/04/22 18:45, Matt Fleming wrote:
> On Tue, 22 Apr 2025 at 04:03, Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:
>>
>> So, either a long preemption, or generating lots of bios in the same
>> round of plug, can result in larger iostat I/O latency. I still think
>> delaying the setting of the request start_time to
>> blk_mq_flush_plug_list() might be a reasonable fix.
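To make that concrete, what I have in mind is roughly the following
untested sketch (not a real patch; the iteration details may differ
from the current code):

/*
 * Untested sketch of the idea: rather than stamping
 * rq->start_time_ns when the request is allocated (when the plug's
 * cached time may already be stale), refresh it in
 * blk_mq_flush_plug_list(), right before the requests are actually
 * issued.
 */
void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
{
	struct request *rq;
	u64 now = ktime_get_ns();	/* real time, bypassing the plug cache */

	rq_list_for_each(&plug->mq_list, rq)
		rq->start_time_ns = now;

	/* ... existing flush logic follows, issuing the requests ... */
}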

> I'll try out your proposed fix also. Is it not possible for a task to
> be preempted during a blk_mq_flush_plug_list() call, e.g. in the
> driver layer?

Let's focus on your regression first. Preemption during flush plug
doesn't introduce a new gap: rq->start_time_ns is set earlier than
that, from the time already cached in the plug.
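For reference, the plug-level time cache works roughly like the
following (paraphrased and simplified from blk_time_get_ns() in
block/blk.h):

/*
 * Simplified paraphrase of blk_time_get_ns(): the first caller
 * inside a plugged section pays for a real ktime_get_ns(); every
 * later caller in the same plug round reuses the cached value until
 * the plug is flushed or the cache is invalidated.
 */
static inline u64 blk_time_get_ns(void)
{
	struct blk_plug *plug = current->plug;

	if (!plug || !in_task())
		return ktime_get_ns();

	if (!plug->cur_ktime) {
		plug->cur_ktime = ktime_get_ns();
		current->flags |= PF_BLOCK_TS;
	}
	return plug->cur_ktime;
}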

> I understand that you might not want to issue I/O on preempt, but
> that's a distinct problem from clearing the cached ktime, no? There is
> no upper bound on the amount of time a task might be scheduled out due
> to preempt, which means there is no limit to the staleness of that
> value. I would assume the only safe thing to do (as is done for
> various other timestamps) is to reset it when the task gets scheduled
> out.

Yes, it's reasonable to clear the cached time for the preempt case.
What I'm concerned about is that even if the task is never scheduled
out, the time can still be stale by milliseconds. I think that is
possible if a lot of bios end up in the same round of plug (a lot of
IO merging).
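If I read the code correctly, the existing invalidation (paraphrased
below from blk_plug_invalidate_ts() in include/linux/blkdev.h) only
runs from sched_update_worker() on the voluntary schedule() path; a
preempted task enters __schedule() directly, so the cached value can
survive however long the task was scheduled out:

/*
 * Paraphrased sketch: clear the plug's cached time when the task is
 * rescheduled.  schedule() reaches this via sched_update_worker()
 * after __schedule(); the preemption paths do not, which is why the
 * cache can go stale across a preemption.
 */
static inline void blk_plug_invalidate_ts(struct task_struct *tsk)
{
	struct blk_plug *plug = tsk->plug;

	if (plug)
		plug->cur_ktime = 0;
	tsk->flags &= ~PF_BLOCK_TS;
}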

Thanks,
Kuai

