Re: After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.

The allocation unit (bluestore_min_alloc_size_hdd) is baked into each OSD when it is created; the only way to change the effective value is to recreate the OSD.
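If you do decide a larger allocation unit is warranted, here is a rough sketch of the recreate flow (assuming a cephadm-managed cluster; osd.12 is only a placeholder, and you would drain and redeploy one OSD at a time so the cluster stays healthy between steps):

    # Raise the default back to 64 KiB; this only affects OSDs created from now on
    ceph config set osd bluestore_min_alloc_size_hdd 65536

    # Drain one OSD, zap its device and mark it for replacement; the orchestrator
    # should then redeploy it on the same device, picking up the new default
    ceph orch osd rm 12 --replace --zap

Check the flags against the documentation for your release before running anything, and verify the result on the recreated OSD before moving on to the next one.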

`ceph osd metadata` will show you the value with which each OSD was created.
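For example (assuming the metadata field is named bluestore_min_alloc_size in your release, and that jq is available):

    # Print the allocation unit each OSD was created with
    ceph osd metadata | jq -r '.[] | "osd.\(.id): \(.bluestore_min_alloc_size)"'

    # Or inspect a single OSD
    ceph osd metadata 12 | grep min_alloc

OSDs created before the upgrade will still report the value they were built with, regardless of the current config default.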

The change was made in part so that RGW bucket data pools holding very small S3 objects don't suffer massive space amplification.
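For a concrete sense of the trade-off: with the old 64 KiB allocation unit, a 4 KiB S3 object still occupies a full 64 KiB on disk, roughly 16x space amplification, while with a 4 KiB unit it occupies just 4 KiB (before replication or EC overhead).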

> On Sep 8, 2025, at 6:33 AM, Best Regards <wu_chulin@xxxxxx> wrote:
> 
> I searched a lot of blog posts, and they suggested the problem might be related to the bluestore_min_alloc_size_hdd parameter being reduced from 65536 to 4096. However, many articles advise against modifying this value and say it should be handled with caution, so I am asking the experts here to confirm whether this is the cause, and whether there is a safer solution.




