Re: Ceph usage doubled after update

Don’t use legacy override reweights. When upmap balancing is enabled, they confuse the balancer.
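
As a rough sketch of how you might find and clear those overrides (the OSD ID below is a placeholder, substitute whatever `ceph osd df tree` shows):

    # REWEIGHT column: anything other than 1.00000 is a legacy override reweight
    ceph osd df tree

    # reset an overridden OSD back to 1.0 (repeat for each affected OSD; "123" is a placeholder)
    ceph osd reweight 123 1.0

    # confirm the balancer is on and running in upmap mode
    ceph balancer status

Reverting the 0.8 overrides will move some data around, but with upmap the balancer should then even out the new OSDs on its own.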

Look for `rados bench` leftovers
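
If `rados bench` was ever run with --no-cleanup (or interrupted), its objects stay in the pool. A minimal sketch, assuming the default benchmark_data object prefix; "mypool" is a placeholder pool name, and listing a pool with 173M objects will take a while:

    # count leftover benchmark objects in a pool
    rados -p mypool ls | grep -c '^benchmark_data'

    # remove objects matching the benchmark prefix
    rados -p mypool cleanup --prefix benchmark_data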

> On May 5, 2025, at 6:01 AM, Yunus Emre Sarıpınar <yunusemresaripinar@xxxxxxxxx> wrote:
> 
> I have a Ceph cluster that was created on Nautilus and later upgraded to Octopus.
> 
> I had 24 nodes and added 8 new nodes to the cluster. The balancer is enabled in upmap mode. I increased the PG count from 8192 to 16384.
> 
> I had to set a reweight of 0.8 on the new OSDs to deal with full usage (there was only 1 TB left because the data was so badly balanced).
> 
> Now the cluster health is OK and the distribution is balanced thanks to my manual fix.
> 
>   data:
>     pools:   4 pools, 16545 pgs
>     objects: 173.32M objects, 82 TiB
>     usage:   436 TiB used, 235 TiB / 671 TiB avail
>     pgs:     15930 active+clean
>              579   active+clean+snaptrim_wait
>              36    active+clean+snaptrim
> 
> Before adding the 8 nodes to the cluster, usage was 215 TiB. I did the update about 2 months ago and the used capacity still hasn't decreased.
> 
> Why did the usage double, and how can I solve it?
> 
>  
> _______________________________________________
> Dev mailing list -- dev@xxxxxxx
> To unsubscribe send an email to dev-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



