Re: After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.

Hi,
Thanks for your reply. I will check if there are any changes in other parameters. Thank you.


Best Regards
wu_chulin@xxxxxx




-------- Original Message --------
From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Sent: September 8, 2025, 22:59
To: Best Regards <wu_chulin@xxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re: After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.



This is baked into each OSD when it is created; the only way to change the effective value is to recreate the OSD.
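A minimal sketch of what that recreation might look like (a hypothetical example assuming OSD id 0 and device /dev/sdb; drain the OSD and adapt to your own deployment first):

    # make newly created HDD OSDs use the old 64 KiB allocation size
    ceph config set osd bluestore_min_alloc_size_hdd 65536

    # destroy and rebuild the OSD; its data is restored via recovery
    ceph osd destroy 0 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdb --destroy
    ceph-volume lvm create --osd-id 0 --data /dev/sdb

Whether reverting to 65536 is the right trade-off for your workload is a separate question from the mechanics above.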

`ceph osd metadata` will show you the value with which each OSD was created.
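For example (assuming OSD id 0; on recent releases the relevant metadata key is bluestore_min_alloc_size):

    $ ceph osd metadata 0 | grep bluestore_min_alloc_size
        "bluestore_min_alloc_size": "4096",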

The change was made in part so that RGW bucket data pools with very small S3 objects don't suffer massive space amplification.
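As a rough illustration: a 1 KiB S3 object written to an OSD created with a 64 KiB min_alloc_size still consumes a full 64 KiB allocation unit (~64x space amplification), while with a 4 KiB unit it consumes only 4 KiB (~4x).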

> On Sep 8, 2025, at 6:33 AM, Best Regards <wu_chulin@xxxxxx> wrote:
> 
> I searched a lot of blogs, and they said it might be related to the bluestore_min_alloc_size_hdd parameter being reduced from 65536 to 4096. However, many articles recommend against modifying this value repeatedly and say it should be handled with caution, so I am asking the experts to confirm this problem for me and whether there is a safer solution.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



