RE: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal

> From: Michael S. Tsirkin <mst@xxxxxxxxxx>
> Sent: Wednesday, May 21, 2025 2:49 PM
> To: Parav Pandit <parav@xxxxxxxxxx>
> Cc: stefanha@xxxxxxxxxx; axboe@xxxxxxxxx; virtualization@xxxxxxxxxxxxxxx;
> linux-block@xxxxxxxxxxxxxx; stable@xxxxxxxxxxxxxxx; NBU-Contact-Li Rongqing
> (EXTERNAL) <lirongqing@xxxxxxxxx>; Chaitanya Kulkarni
> <chaitanyak@xxxxxxxxxx>; xuanzhuo@xxxxxxxxxxxxxxxxx;
> pbonzini@xxxxxxxxxx; jasowang@xxxxxxxxxx; Max Gurtovoy
> <mgurtovoy@xxxxxxxxxx>; Israel Rukshin <israelr@xxxxxxxxxx>
> Subject: Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal
> 
> On Wed, May 21, 2025 at 09:14:31AM +0000, Parav Pandit wrote:
> > > From: Michael S. Tsirkin <mst@xxxxxxxxxx>
> > > Sent: Wednesday, May 21, 2025 1:48 PM
> > >
> > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote:
> > > > When the PCI device is surprise removed, requests may not be
> > > > completed by the device, as the VQ is marked as broken. Due to
> > > > this, the disk deletion hangs.
> > > >
> > > > Fix it by aborting the requests when the VQ is broken.
> > > >
> > > > With this fix now fio completes swiftly.
> > > > An alternative of an IO timeout was considered; however, when
> > > > the driver knows the block device is unresponsive, swiftly
> > > > clearing the requests enables users and upper layers to react
> > > > quickly.
> > > >
> > > > Verified with multiple device unplug iterations, with requests
> > > > pending in the virtio used ring and some pending with the device.
> > > >
> > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> > > > Cc: stable@xxxxxxxxxxxxxxx
> > > > Reported-by: lirongqing@xxxxxxxxx
> > > > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@xxxxxxxxx/
> > > > Reviewed-by: Max Gurtovoy <mgurtovoy@xxxxxxxxxx>
> > > > Reviewed-by: Israel Rukshin <israelr@xxxxxxxxxx>
> > > > Signed-off-by: Parav Pandit <parav@xxxxxxxxxx>
> > > > ---
> > > > changelog:
> > > > v0->v1:
> > > > - Addressed comments from Stefan: renamed a cleanup function
> > > > - Improved logic for handling any outstanding requests
> > > >   in the bio layer
> > > > - Improved cancel callback to sync with ongoing done()
> > >
> > > thanks for the patch!
> > > questions:
> > >
> > >
> > > > ---
> > > >  drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 95 insertions(+)
> > > >
> > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > > index 7cffea01d868..5212afdbd3c7 100644
> > > > --- a/drivers/block/virtio_blk.c
> > > > +++ b/drivers/block/virtio_blk.c
> > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> > > >  	blk_status_t status;
> > > >  	int err;
> > > >
> > > > +	/* Immediately fail all incoming requests if the vq is broken.
> > > > +	 * Once the queue is unquiesced, upper block layer flushes any
> > > > +	 * pending queued requests; fail them right away.
> > > > +	 */
> > > > +	if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq)))
> > > > +		return BLK_STS_IOERR;
> > > > +
> > > >  	status = virtblk_prep_rq(hctx, vblk, req, vbr);
> > > >  	if (unlikely(status))
> > > >  		return status;
> > >
> > > just below this:
> > >         spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
> > >         err = virtblk_add_req(vblk->vqs[qid].vq, vbr);
> > >         if (err) {
> > >
> > >
> > > and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken vq.
> > >
> > > Why do we need to check it one extra time here?
> > >
> > It may work, but if for some reason the hw queue is stopped in this
> > flow, it can hang the flushing of IOs.
> 
> > I considered it risky to rely on the error code -ENOSPC returned by
> > layers outside the virtio-blk driver.
> > In other words, if the lower layer changed for some reason, we may
> > end up stopping the hw queue when broken, and requests would hang.
> >
> > Compared to that, the one-time entry check seems more robust.
> 
> I don't get it.
> Checking twice in a row is more robust?
No. I am not confident in relying on the error code -ENOSPC from layers outside of the virtio-blk driver.

If -ENOSPC arrives for a broken VQ, then the hw queue is stopped and requests could be stuck.
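
For reference, the error handling just below the add in virtio_queue_rq() looks
roughly like this (paraphrased from a recent virtio_blk.c; exact details may
differ by tree):

        spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
        err = virtblk_add_req(vblk->vqs[qid].vq, vbr);
        if (err) {
                virtqueue_kick(vblk->vqs[qid].vq);
                /* Don't stop the queue if -ENOMEM: we may have failed to
                 * bounce the buffer due to global resource outage.
                 */
                if (err == -ENOSPC)
                        blk_mq_stop_hw_queue(hctx);
                spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
                virtblk_unmap_data(req, vbr);
                virtblk_cleanup_cmd(req);
                switch (err) {
                case -ENOSPC:
                        return BLK_STS_DEV_RESOURCE;
                case -ENOMEM:
                        return BLK_STS_RESOURCE;
                default:
                        return BLK_STS_IOERR;
                }
        }

If a broken VQ were ever reported with -ENOSPC instead of -EIO,
blk_mq_stop_hw_queue() would run here and the queue would never be restarted.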

> What am I missing?
> Can you describe the scenario in more detail?
> 
> >
> > >
> > >
> > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist)
> > > >  	while ((req = rq_list_pop(rqlist))) {
> > > > 		struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
> > > >
> > > > +		if (unlikely(virtqueue_is_broken(this_vq->vq))) {
> > > > +			rq_list_add_tail(&requeue_list, req);
> > > > +			continue;
> > > > +		}
> > > > +
> > > >  		if (vq && vq != this_vq)
> > > >  			virtblk_add_req_batch(vq, &submit_list);
> > > >  		vq = this_vq;
> > >
> > > similarly
> > >
> > The error code does not surface up here from virtblk_add_req().
> 
> 
> but wait a sec:
> 
> static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
>                 struct rq_list *rqlist)
> {
>         struct request *req;
>         unsigned long flags;
>         bool kick;
> 
>         spin_lock_irqsave(&vq->lock, flags);
> 
>         while ((req = rq_list_pop(rqlist))) {
>                 struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
>                 int err;
> 
>                 err = virtblk_add_req(vq->vq, vbr);
>                 if (err) {
>                         virtblk_unmap_data(req, vbr);
>                         virtblk_cleanup_cmd(req);
>                         blk_mq_requeue_request(req, true);
>                 }
>         }
> 
>         kick = virtqueue_kick_prepare(vq->vq);
>         spin_unlock_irqrestore(&vq->lock, flags);
> 
>         if (kick)
>                 virtqueue_notify(vq->vq);
> }
> 
> 
> it actually handles the error internally?
> 
For all errors, it requeues the request here.
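
With the entry check from this patch in place, those requeued requests are
failed on the next dispatch instead of being requeued again and again. A rough
sketch of the resulting flow:

        /* virtblk_add_req_batch(): the add failed (e.g. vq broken) */
        blk_mq_requeue_request(req, true);      /* re-dispatch later */

        /* The re-dispatch goes through virtio_queue_rq(), which with
         * this patch now fails the request up front:
         */
        if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq)))
                return BLK_STS_IOERR;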

> 
> 
> 
> > It would end up adding a check for a special error code here as
> > well, to abort by translating broken VQ -> EIO to break the loop in
> > virtblk_add_req_batch().
> >
> > Weighing that against a data path based on specific error codes,
> > which may require an audit of lower layers now and in the future, an
> > explicit check for a broken vq in this layer could be better.
> >
> > [..]
> 
> 
> Checking that the add was successful is preferred because it has to be
> done *anyway* - the device can become broken after you check, before
> the add.
> 
> So I would like to understand why we are also checking explicitly; I
> do not get it so far.

Checking explicitly avoids depending on logic tied to a specific error code.
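
Concretely, the broken state reaches virtio-blk only as whatever errno the
virtio core chooses; today virtqueue_add() in virtio_ring.c reports it as -EIO
(paraphrased):

        if (unlikely(vq->broken)) {
                END_USE(vq);
                return -EIO;
        }

The explicit virtqueue_is_broken() check tests the vq state directly, so
virtio-blk does not have to assume which errno a broken vq maps to.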




