Re: Doubled number of PGs from 8192 to 16384 - backfill bottlenecked

Hi Torkil,

We have noticed that backfilling PGs are very slow on erasure-coded pools backed by HDDs.
Is the data movement itself making any visible progress, without reducing the backfill via pgremapper?
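
To check, something like the following shows whether objects are actually moving (standard ceph CLI, nothing cluster-specific assumed):

    ceph -s                  # overall recovery/backfill throughput in the io: section
    ceph pg ls backfilling   # per-PG state and object counts for PGs currently backfilling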

You can also try https://github.com/TheJJ/ceph-balancer with its 'showremapped' subcommand to get a full picture of what is going on with the backfills.
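
For example, a minimal sketch, assuming the entry point is still the placementoptimizer.py script shipped in that repository:

    git clone https://github.com/TheJJ/ceph-balancer.git
    cd ceph-balancer
    ./placementoptimizer.py showremapped   # one line per remapped PG with its backfill progress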


Best,
Laimis J.

> On 30 Apr 2025, at 00:54, Torkil Svensgaard <torkil@xxxxxxxx> wrote:
> 
> pool 11 'rbd_ec_data' erasure profile DRCMR_k4m2 size 6 min_size 5 crush_rule 0 object_hash rjenkins pg_num 16384 pgp_num 16384 autoscale_mode off last_change 2832704 lfor 0/1291190/2832700 flags hashpspool,ec_overwrites,selfmanaged_snaps,bulk stripe_width 16384 fast_read 1 compression_algorithm snappy compression_mode aggressive application rbd
