Ceph all-flash performance lower than local NVMe

Hello all,

I ran some fio tests on a local NVMe disk and on a Ceph RBD volume. Why is the IO on Ceph so much lower, when the Ceph cluster is also all NVMe?
What should I tune to get a comparable amount of IO?

root@node01:~/fio-cdm# python3 fio-cdm ./
tests: 5, size: 1.0GiB, target: /root/fio-cdm 6.3GiB/64.4GiB
|Name        |  Read(MB/s)| Write(MB/s)|
|------------|------------|------------|
|SEQ1M Q8 T1 |     8441.37|     3588.71|
|SEQ1M Q1 T1 |     3074.86|     1172.46|
|RND4K Q32T16|      723.65|      733.76|
|. IOPS      |   176671.80|   179141.74|
|. latency us|     2892.49|     2839.37|
|RND4K Q1 T1 |       71.05|       57.88|
|. IOPS      |    17347.13|    14131.57|
|. latency us|       56.13|       66.40|
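For reference, the RND4K Q1 T1 read line is roughly this fio invocation (my approximation of what fio-cdm runs; the script's exact options may differ, and --filename here is just a placeholder):

    fio --name=rnd4k-q1t1 --filename=testfile --size=1G \
        --rw=randread --bs=4k --iodepth=1 --numjobs=1 \
        --ioengine=libaio --direct=1 --runtime=30 --time_based

The write pass would be the same with --rw=randwrite.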
 
### When the VM is moved to Ceph storage...
root@node01:~/fio-cdm# python3 fio-cdm ./
tests: 5, size: 1.0GiB, target: /root/fio-cdm 9.3GiB/64.4GiB
|Name        |  Read(MB/s)| Write(MB/s)|
|------------|------------|------------|
|SEQ1M Q8 T1 |     1681.40|      889.89|
|SEQ1M Q1 T1 |      310.74|      852.11|
|RND4K Q32T16|      403.04|      274.23|
|. IOPS      |    98397.32|    66951.49|
|. latency us|     5196.98|     7637.10|
|RND4K Q1 T1 |        4.69|       45.18|
|. IOPS      |     1146.17|    11029.39|
|. latency us|      869.47|       87.50|
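The single-threaded 4K reads look latency-bound rather than throughput-bound: 1146 IOPS x 4 KiB ~= 4.69 MB/s, and 1 s / 869 us ~= 1150 IOPS, so at queue depth 1 each read seems to wait out a full network round trip (~870 us on Ceph vs ~56 us locally) before the next one is issued. I assume it is this per-IO latency, not the raw NVMe speed, that I need to tune for, but I am not sure where to start.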

Regards,
Dev
