After upgrading a Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from around 15ms to around 100ms during peak I/O periods.

Hi, 


We use Ceph RBD with triple-replica pools. Every day at midnight, a logrotate task inside the guest OS rotates and compresses various logs. Before the upgrade, I/O latency during this window rose from 7ms to around 15ms, which had only a minor impact.


However, after upgrading to Quincy, latency jumped from 7ms to 100ms, with some OSDs even reaching 200ms, which significantly impacts guest OS operations. We upgraded from Octopus to Pacific, and then from Pacific to Quincy the next day. Pacific ran for a day with normal latency, but after the upgrade to Quincy, latency increased dramatically (this refers to the midnight window; there is no noticeable difference at other times).


I compared the monitoring data from before and after the upgrade, going from the pool level down to the OSD level and then to the disk level. The ceph_pool_rd metric and the ceph_osd_op_r metric (across all OSDs backing the pool) show no significant change, but the node_disk_reads_completed_total metric for the disks behind those OSDs increased significantly after the upgrade. I then picked several of the affected OSDs at random and compared their data before and after the upgrade: the number of physical read IOs did increase, almost doubling from about 150 ops to 300+ ops, which seems to exceed what the HDDs can handle, so latency becomes very high.
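For reference, here is a rough sketch of the kind of before/after comparison I did against our Prometheus instance. The Prometheus URL, the pool_id, the device selector, and the two midnight time windows below are placeholders for my environment, not values taken from the cluster itself:

# Sketch: compare the three metrics over the midnight window, before and
# after the upgrade. URL, label selectors and timestamps are placeholders.
import requests

PROM = "http://prometheus.example:9090"   # placeholder

QUERIES = {
    # client-visible reads at the pool level
    "pool reads":   'rate(ceph_pool_rd{pool_id="3"}[5m])',
    # reads handled by the OSD daemons backing the pool
    "osd op reads": 'sum(rate(ceph_osd_op_r[5m]))',
    # physical reads actually hitting the HDDs
    "disk reads":   'sum(rate(node_disk_reads_completed_total{device=~"sd.*"}[5m]))',
}

def fetch(query, start, end, step="60s"):
    """Run a Prometheus range query and return its [timestamp, value] pairs."""
    r = requests.get(f"{PROM}/api/v1/query_range",
                     params={"query": query, "start": start,
                             "end": end, "step": step})
    r.raise_for_status()
    result = r.json()["data"]["result"]
    return result[0]["values"] if result else []

# Midnight windows (epoch seconds) before and after the upgrade -- placeholders.
windows = {"before": (1735689600, 1735693200),
           "after":  (1736294400, 1736298000)}

for name, query in QUERIES.items():
    for label, (start, end) in windows.items():
        values = fetch(query, start, end)
        peak = max((float(v) for _, v in values), default=0.0)
        print(f"{name:12s} {label:6s} peak ~ {peak:.0f}/s")

In my data only the last of these (the physical disk reads) shows the jump; the pool-level and OSD-level read rates look essentially the same before and after.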


I have read a lot of blog posts suggesting this might be related to the default bluestore_min_alloc_size_hdd having been reduced from 65536 to 4096, but many of them also advise against changing this value lightly and say it should be handled with caution. So I would like to ask the experts here to confirm whether this is really the cause, and whether there is a safer way to fix it.
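In case it matters for the diagnosis, this is roughly how I check which allocation size the existing OSDs were actually created with, as opposed to the currently configured default. I am assuming here that the build exposes a bluestore_min_alloc_size field in the OSD metadata, which may not be the case on every release:

# Sketch: report the configured default and, where available, the per-OSD
# allocation size recorded when each OSD was created. Assumes the ceph CLI
# is available; the bluestore_min_alloc_size metadata field may be missing
# on older releases, in which case "unknown" is printed.
import json
import subprocess

def ceph_json(*args):
    """Run a ceph CLI command and parse its JSON output."""
    out = subprocess.check_output(["ceph", *args, "--format", "json"])
    return json.loads(out)

# Configured default for newly created HDD OSDs; existing OSDs keep whatever
# value they were created with.
configured = subprocess.check_output(
    ["ceph", "config", "get", "osd", "bluestore_min_alloc_size_hdd"]).decode().strip()
print(f"configured bluestore_min_alloc_size_hdd: {configured}")

# Per-OSD value recorded at creation time, if the metadata exposes it.
for meta in ceph_json("osd", "metadata"):
    osd_id = meta.get("id")
    alloc = meta.get("bluestore_min_alloc_size", "unknown")
    rotational = meta.get("bluestore_bdev_rotational", "?")
    print(f"osd.{osd_id}: min_alloc_size={alloc} rotational={rotational}")

As far as I understand, this value is fixed when an OSD is first created, so I wanted to see what our existing OSDs are actually using before touching anything.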


Thanks.


Best Regards
wu_chulin@xxxxxx
