Re: How to improve write latency on R3 pool Images

Hello Anthony

I am measuring directly on all RBD images from a Ceph node using rbd iostat.

Is there any parameter I can tune to improve image latency?
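For reference, per-image rates and latencies like the ones quoted below can be pulled with the `rbd perf image` commands (available since Nautilus, backed by the `rbd_support` mgr module). A minimal sketch; the pool name `rbd` is a placeholder:

```shell
# Live per-image WR/RD ops, bytes, and latency for one pool
# (pool name "rbd" is a placeholder for your R3 pool).
rbd perf image iostat rbd

# top-like view, sorted by activity
rbd perf image iotop rbd

# Optionally export the same per-image counters to Prometheus
# via the mgr prometheus module (pool list is illustrative):
ceph config set mgr mgr/prometheus/rbd_stats_pools rbd
```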

Regards
Dev
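On the thread-count settings mentioned later in the thread, a hedged sketch of how to inspect and adjust them via the config database. The option names follow the Ceph docs (`ms_async_op_threads` on the OSD side, `rbd_op_threads` on the librbd client side); the values shown are illustrative, not recommendations, and should be validated against your workload:

```shell
# Check current values before changing anything.
ceph config get osd ms_async_op_threads        # messenger worker threads (default 3)
ceph config get client rbd_op_threads          # librbd op threads (default 1)

# Illustrative changes only; measure before and after.
ceph config set osd ms_async_op_threads 4
ceph config set client rbd_op_threads 2        # librbd option: takes effect on client (re)connect
```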

On Fri, 1 Aug 2025 at 8:05 PM, Anthony D'Atri <anthony.datri@xxxxxxxxx>
wrote:

> How are you measuring this?
>
> VMs do add an emulation cost on devices.
>
>
> On Aug 1, 2025, at 11:03 PM, Devender Singh <devender@xxxxxxxxxx> wrote:
>
> No, they are all SSDs; there are no HDDs.
>
> Regards
> Dev
>
>
> On Fri, 1 Aug 2025 at 7:58 PM, Anthony D'Atri <anthony.datri@xxxxxxxxx>
> wrote:
>
>> What is this the output of?  Is this reading from HDDs?
>>
>> > On Aug 1, 2025, at 9:01 PM, Devender Singh <devender@xxxxxxxxxx> wrote:
>> >
>> > Hello All
>> >
>> > We are using a replicated size=3 (R3) pool with RBD images backing Ubuntu
>> > VMs.
>> >
>> > We have three nodes…
>> >
>> > I have the default settings: ms_async threads at 3 and rbd_op_threads at 1.
>> > How can I improve the latency below, mainly writes?
>> >
>> > # sed 's/^.\{60\}//' a
>> >   WR    RD    WR_BYTES    RD_BYTES      WR_LAT      RD_LAT
>> > 50/s   0/s   275 KiB/s       0 B/s    17.36 ms     0.00 ns
>> > 45/s   4/s   230 KiB/s    56 KiB/s    66.02 ms     6.90 ms
>> > 25/s   0/s   146 KiB/s       0 B/s    68.52 ms     0.00 ns
>> > 24/s   0/s   548 KiB/s       0 B/s    22.49 ms     0.00 ns
>> > 22/s   0/s   114 KiB/s       0 B/s    20.85 ms     0.00 ns
>> > 18/s   0/s   204 KiB/s       0 B/s    48.31 ms     0.00 ns
>> > 17/s   0/s   182 KiB/s       0 B/s    10.00 ms     0.00 ns
>> > 15/s   4/s    25 MiB/s    17 KiB/s   254.61 ms    17.42 ms
>> > 12/s   0/s   268 KiB/s       0 B/s    29.67 ms     0.00 ns
>> > 12/s   2/s    13 MiB/s   8.8 KiB/s   147.05 ms     2.30 ms
>> > 11/s   0/s    77 KiB/s       0 B/s    50.42 ms     0.00 ns
>> > 11/s   0/s    74 KiB/s       0 B/s    35.54 ms     0.00 ns
>> > 10/s   0/s   101 KiB/s       0 B/s    80.83 ms     0.00 ns
>> >  9/s   0/s    65 KiB/s       0 B/s    24.81 ms     0.00 ns
>> >  7/s   0/s    37 KiB/s       0 B/s    59.11 ms     0.00 ns
>> >  7/s   0/s    33 KiB/s       0 B/s    43.95 ms     0.00 ns
>> >  6/s   8/s    31 KiB/s    34 KiB/s   131.78 ms   127.46 us
>> >  4/s   0/s    18 KiB/s       0 B/s    56.90 ms     0.00 ns
>> >  4/s   0/s    23 KiB/s       0 B/s    74.13 ms     0.00 ns
>> >  3/s   0/s    18 KiB/s       0 B/s    38.25 ms     0.00 ns
>> >  2/s   0/s    10 KiB/s       0 B/s    26.28 ms     0.00 ns
>> >  2/s   0/s    11 KiB/s       0 B/s    49.80 ms     0.00 ns
>> >  1/s   0/s    90 KiB/s       0 B/s    43.02 ms     0.00 ns
>> >  1/s   0/s   8.8 KiB/s       0 B/s    24.99 ms     0.00 ns
>> >  1/s   0/s    10 KiB/s       0 B/s   102.38 ms     0.00 ns
>> >  1/s   0/s    15 KiB/s       0 B/s    57.23 ms     0.00 ns
>> >  0/s   0/s   7.2 KiB/s       0 B/s   149.90 ms     0.00 ns
>> >  0/s   0/s     4 KiB/s       0 B/s     1.58 ms     0.00 ns
>> >  0/s   0/s   3.2 KiB/s       0 B/s     1.02 ms     0.00 ns
>> >  0/s   0/s     819 B/s       0 B/s    40.45 ms     0.00 ns
>> >
>> > Regards
>> > Dev
>> >
>> >
>> >
>> > _______________________________________________
>> > ceph-users mailing list -- ceph-users@xxxxxxx
>> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>
>>
>



