Re: CEPH performance all Flash lower than local

Hi

Keep in mind that when you compare a local NVMe device with RBD, you are adding the following to the mix:
- Network connectivity (link speed and bandwidth, switching latency, ...)
- All the lines of code that make up Ceph (both client and server side)
- The CPU time needed to execute all of that code as fast as possible

So you are comparing two things that are radically different in terms of features and architecture.
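
If you want to quantify that gap on your cluster, fio can also drive an RBD image directly through librbd (the "rbd" ioengine), so you can repeat the same 4K Q1 test against the cluster and compare latencies. A rough sketch, assuming fio was built with rbd support and using hypothetical names for the pool ("rbd"), the image ("fiotest") and the client ("admin"):

    # create a throwaway test image first, e.g.: rbd create rbd/fiotest --size 10G
    fio --name=rbd-rand4k-q1 --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=fiotest \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --time_based --runtime=60 --group_reporting

The completion latency fio reports there, compared with the ~56/66 us you measured locally, tells you how much the network round trips and the Ceph code path add for a single-threaded, queue depth 1 workload.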

The Ceph community is working hard to keep making Ceph better; take a look at the Crimson project <https://ceph.io/en/news/crimson/>, which aims to improve OSD latency and performance for flash-based Ceph clusters.

Best
JC

> On Jul 23, 2025, at 15:52, Devender Singh <devender@xxxxxxxxxx> wrote:
> 
> root@node01:~/fio-cdm# python3 fio-cdm ./
> tests: 5, size: 1.0GiB, target: /root/fio-cdm 6.3GiB/64.4GiB
> |Name        |  Read(MB/s)| Write(MB/s)|
> |------------|------------|------------|
> |SEQ1M Q8 T1 |     8441.37|     3588.71|
> |SEQ1M Q1 T1 |     3074.86|     1172.46|
> |RND4K Q32T16|      723.65|      733.76|
> |. IOPS      |   176671.80|   179141.74|
> |. latency us|     2892.49|     2839.37|
> |RND4K Q1 T1 |       71.05|       57.88|
> |. IOPS      |    17347.13|    14131.57|
> |. latency us|       56.13|       66.40|



