Re: [PATCH 2/9] ublk: properly serialize all FETCH_REQs

On Mon, Apr 14, 2025 at 02:52:59PM -0600, Uday Shankar wrote:
> On Mon, Apr 14, 2025 at 02:39:33PM -0600, Jens Axboe wrote:
> > On 4/14/25 1:58 PM, Uday Shankar wrote:
> > > +static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
> > > +		      struct ublk_queue *ubq, struct ublk_io *io,
> > > +		      const struct ublksrv_io_cmd *ub_cmd,
> > > +		      unsigned int issue_flags)
> > > +{
> > > +	int ret = 0;
> > > +
> > > +	if (issue_flags & IO_URING_F_NONBLOCK)
> > > +		return -EAGAIN;
> > > +
> > > +	mutex_lock(&ub->mutex);
> > 
> > This looks like overkill, if we can trylock the mutex that should surely
> > be fine? And I would imagine succeed most of the time, hence making the
> > inline/fastpath fine with F_NONBLOCK?
> 
> Yeah, makes sense. How about this?
> 
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index cdb1543fa4a9..bf4a88cb1413 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -1832,8 +1832,8 @@ static void ublk_nosrv_work(struct work_struct *work)
>  
>  /* device can only be started after all IOs are ready */
>  static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
> +	__must_hold(&ub->mutex)
>  {
> -	mutex_lock(&ub->mutex);
>  	ubq->nr_io_ready++;
>  	if (ublk_queue_ready(ubq)) {
>  		ubq->ubq_daemon = current;
> @@ -1845,7 +1845,6 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
>  	}
>  	if (ub->nr_queues_ready == ub->dev_info.nr_hw_queues)
>  		complete_all(&ub->completion);
> -	mutex_unlock(&ub->mutex);
>  }
>  
>  static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
> @@ -1929,6 +1928,55 @@ static int ublk_unregister_io_buf(struct io_uring_cmd *cmd,
>  	return io_buffer_unregister_bvec(cmd, index, issue_flags);
>  }
>  
> +static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
> +		      struct ublk_queue *ubq, struct ublk_io *io,
> +		      const struct ublksrv_io_cmd *ub_cmd,
> +		      unsigned int issue_flags)
> +{
> +	int ret = 0;
> +
> +	if (!mutex_trylock(&ub->mutex)) {
> +		if (issue_flags & IO_URING_F_NONBLOCK)
> +			return -EAGAIN;
> +		else
> +			mutex_lock(&ub->mutex);

Thinking about it further: ub->mutex already looks fat enough, so here we
can use ub->lock (a spinlock) to serialize the setup, and then the
trylock & -EAGAIN fallback can be avoided entirely.

It is actually fine to replace the mutex in ublk_mark_io_ready() with a
spinlock as well.
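
For illustration, a rough sketch of that direction below; the ub->lock
field name and the body of the locked region are placeholders here, not
part of the posted patch:

/*
 * Sketch only: assumes a new spinlock_t lock member is added to
 * struct ublk_device for serializing FETCH_REQ setup, and that
 * ublk_mark_io_ready() is switched to expect that spinlock held.
 */
static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
		      struct ublk_queue *ubq, struct ublk_io *io,
		      const struct ublksrv_io_cmd *ub_cmd,
		      unsigned int issue_flags)
{
	int ret = 0;

	/*
	 * A spinlock never sleeps, so the trylock + -EAGAIN fallback
	 * needed for IO_URING_F_NONBLOCK with the mutex goes away.
	 */
	spin_lock(&ub->lock);
	/* per-io FETCH_REQ setup/validation would go here */
	ublk_mark_io_ready(ub, ubq);
	spin_unlock(&ub->lock);

	return ret;
}

The locked region is short and never sleeps, which is why a spinlock is
enough and the NONBLOCK special-casing disappears.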



Thanks,
Ming




