Hi,

Thanks for your reply. I will check whether any other parameters have changed.

Thank you.

Best Regards
wu_chulin@xxxxxx

----- Original Message -----
From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Sent: September 8, 2025, 22:59
To: Best Regards <wu_chulin@xxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re: After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.

This is baked into each OSD when it is created; the only way to change the effective value is to recreate the OSD. `ceph osd metadata` will show you the value with which each OSD was created. The change was made in part so that RGW bucket data pools with very small S3 objects don't suffer massive space amplification.

> On Sep 8, 2025, at 6:33 AM, Best Regards <wu_chulin@xxxxxx> wrote:
>
> I searched a lot of blogs, and they said it might be related to the bluestore_min_alloc_size_hdd parameter being reduced from 65536 to 4096. However, many articles advise against modifying this value and say it should be handled with caution, so I am asking the experts to confirm whether this is the cause and whether there is a safer solution.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
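
[Editor's sketch] A minimal example of the `ceph osd metadata` check suggested above, for listing the allocation size each OSD was created with. The metadata key name bluestore_min_alloc_size and the use of jq are assumptions here; adjust them to whatever your release actually reports:

    # List the allocation size baked into every OSD at creation time.
    # Assumes jq is installed and that the OSD metadata exposes a
    # bluestore_min_alloc_size key (an assumption, not confirmed above).
    for id in $(ceph osd ls); do
        val=$(ceph osd metadata "$id" | jq -r '.bluestore_min_alloc_size // "unknown"')
        echo "osd.$id min_alloc_size=$val"
    done

Per the reply above, any OSD still reporting the old 65536 default keeps that value until it is recreated; redeploying the OSD is the only way to move it to 4096.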