Re: Improper io_opt setting for md raid5

Csordás,

I basically agree with everything in your analysis.

>> Then in drivers/scsi/sd.c, inside sd_revalidate_disk(), we have the following code:
>> 3785         /*
>> 3786          * Limit default to SCSI host optimal sector limit if set. There may be
>> 3787          * an impact on performance for when the size of a request exceeds this
>> 3788          * host limit.
>> 3789          */
>> 3790         lim.io_opt = sdp->host->opt_sectors << SECTOR_SHIFT;
>> 3791         if (sd_validate_opt_xfer_size(sdkp, dev_max)) {
>> 3792                 lim.io_opt = min_not_zero(lim.io_opt,
>> 3793                                 logical_to_bytes(sdp, sdkp->opt_xfer_blocks));
>> 3794         }
>> 
>> lim.io_opt for all my SATA disks attached to the mpt3sas HBA is 32767
>> sectors, because of the above code block.

shost->opt_sectors was originally used to seed rw_max, not io_opt, and I
think that is the appropriate thing to do.
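
For reference, the earlier arrangement looked roughly like the sketch
below (from memory, not the exact hunk; helper and field names follow
drivers/scsi/sd.c):

        /*
         * Sketch: io_opt came only from the device-reported OPTIMAL
         * TRANSFER LENGTH, while the host's opt_sectors merely capped
         * the default request size (rw_max) and never fed into io_opt.
         */
        if (sd_validate_opt_xfer_size(sdkp, dev_max))
                q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks);

        rw_max = min_not_zero(rw_max, sdp->host->opt_sectors);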

A SCSI host driver reporting some ludicrous limit is not necessarily
representative of a "good" I/O size as far as either the disk drive or
the Linux I/O stack is concerned.

shost->opt_sectors should clearly set an upper bound for max_sectors.
But I don't think we should ever increase any queue limit based on what
is reported in opt_sectors.
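
Concretely, something along the lines of the following sketch (untested,
not a patch; whether the cap ultimately belongs in max_sectors or
another limit field is a detail) would match that:

        /*
         * Untested sketch: let the host's opt_sectors bound max_sectors
         * only, and derive io_opt solely from the device-reported
         * OPTIMAL TRANSFER LENGTH.
         */
        lim.max_sectors = min_not_zero(lim.max_sectors,
                                       sdp->host->opt_sectors);

        lim.io_opt = 0;
        if (sd_validate_opt_xfer_size(sdkp, dev_max))
                lim.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks);

That way an md raid5 stacked on top would no longer inherit an io_opt
derived from the HBA's 32767-sector cap when the stacking code combines
the component limits.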

-- 
Martin K. Petersen




