Re: Ceph usage doubled after update

This isn't directly addressing your question, but it's tangentially related. I'm not sure of your use case, but you may have significant space amplification from the default 64k BlueStore allocation size in pre-Pacific releases. You may want to consider upgrading to a later release (at least Pacific) and then "repaving" the cluster onto the 4k allocation-size default, i.e. redeploying OSDs one at a time, one server at a time, or whatever granularity works for your topology, safety factors, and redundancy.
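For reference, the allocation-size defaults are easy to check. The important caveat is that the value an existing OSD uses was fixed when that OSD was created, so the config only tells you what new OSDs would get (which is exactly why repaving is needed). From memory, something roughly like this:

    # defaults that newly created OSDs on this cluster would pick up
    # (pre-Pacific the hdd default was 65536; Pacific and later use 4096)
    ceph config get osd bluestore_min_alloc_size_hdd
    ceph config get osd bluestore_min_alloc_size_ssd

    # configured value as seen by a running OSD (osd.0 is just an example,
    # run on the host where that OSD lives)
    ceph daemon osd.0 config show | grep bluestore_min_alloc_size

Double-check the option names against your release's documentation; I'm going from memory here.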

Just be aware that the upmap automatic balancer doesn't take different allocation sizes into account and balances on PG counts, so you'll need to disable it until the cluster is fully repaved (and potentially do some manual balancing with something like the JJ balancer). I suggest this because you may get back a decent amount of space, depending on your current setup.
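The mechanics are straightforward; from memory the relevant commands are roughly these (verify against your release before relying on them):

    # pause automatic balancing for the duration of the repave
    ceph balancer status
    ceph balancer off

    # rough gauge of amplification: compare STORED vs USED per pool,
    # keeping your replication factor / EC overhead in mind
    ceph df detail

    # per-OSD fill levels while balancing manually
    ceph osd df tree

Once everything has been recreated with the 4k allocation size you can re-enable it with "ceph balancer on".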

David

On Mon, May 5, 2025, at 05:01, Yunus Emre Sarıpınar wrote:
> I have a Ceph cluster that was created on Nautilus and later upgraded to Octopus.
>
> I had 24 nodes and added 8 new nodes to the cluster. The balancer is
> enabled in upmap mode. I increased my PG count from 8192 to 16384.
>
> I had to set reweight 0.8 on the new OSDs to stop them filling up (there
> was only 1 TB left because of the uneven distribution).
>
> Now the cluster health is OK and the distribution is balanced after my manual fix.
>
>   data:
>     pools:   4 pools, 16545 pgs
>     objects: 173.32M objects, 82 TiB
>     usage:   436 TiB used, 235 TiB / 671 TiB avail
>     pgs:     15930 active+clean
>              579   active+clean+snaptrim_wait
>              36    active+clean+snaptrim
>
> Before adding the 8 nodes to the cluster, usage was 215 TiB. I did the
> update about 2 months ago and the usage still hasn't decreased.
>
> Why did the usage double, and how can I solve it?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



