Re: Improper io_opt setting for md raid5

On Tue, Jul 15, 2025 at 11:56:57PM +0800, Coly Li wrote:
> 240         if (dma_dev->dma_mask) {
> 241                 shost->opt_sectors = min_t(unsigned int, shost->max_sectors,
> 242                                 dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
> 243         }

Just comparing with how NVMe uses dma_opt_mapping_size(): its return value
is used to limit "max_sectors" rather than an optimal-sectors limit, so the
different usage here seems odd to me. But there doesn't appear to be
anything else setting shost->opt_sectors either.
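
For reference, this is roughly how the NVMe PCIe driver consumes it
(paraphrased from memory of drivers/nvme/host/pci.c, so treat it as a
sketch rather than a verbatim quote):

        /*
         * Cap the maximum command size so a single request's DMA mapping
         * stays within what the IOMMU can map efficiently.
         */
        dev->ctrl.max_hw_sectors = min_t(u32, NVME_MAX_KB_SZ << 1,
                        dma_opt_mapping_size(&pdev->dev) >> 9);

i.e. the DMA optimal mapping size bounds the hardware transfer limit; it
never feeds an "optimal I/O size" hint.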
 
> Then in drivers/scsi/sd.c, inside sd_revalidate_disk() from the following code,
> 3785         /*
> 3786          * Limit default to SCSI host optimal sector limit if set. There may be
> 3787          * an impact on performance for when the size of a request exceeds this
> 3788          * host limit.
> 3789          */
> 3790         lim.io_opt = sdp->host->opt_sectors << SECTOR_SHIFT;

Checking where "opt_sectors" was introduced, commit 608128d391fa5c9 says it
was added to expose the host's optimal transfer size, but the io_opt limit
is supposed to describe the device's. There seems to be a mismatch in usage
here: "opt_sectors" should only be an upper limit on "io_opt", not its
starting value.
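
If that reading is right, a minimal (untested) sketch of what I'd expect in
sd_revalidate_disk() would be to clamp the device-reported value instead of
seeding it from the host, applied once the device-reported io_opt (if any)
is known, something like:

        /*
         * Untested sketch: use the host optimal transfer size only as a
         * cap on the device-reported io_opt, not as its default value.
         */
        if (sdp->host->opt_sectors)
                lim.io_opt = min_not_zero(lim.io_opt,
                                (unsigned int)(sdp->host->opt_sectors << SECTOR_SHIFT));

That way the host-wide value only caps io_opt rather than overriding
whatever the device itself reports.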



