Ceph usage doubled after update

I have a Ceph cluster that was created on Nautilus and later upgraded to Octopus.

I had 24 nodes and added 8 new nodes to the cluster. The balancer is enabled in upmap mode. I increased my PG count from 8192 to 16384.
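For reference, the balancer mode and PG increase described above would typically be applied with commands like the following (the pool name `mypool` is a placeholder, not from this post):

```shell
# Enable the balancer in upmap mode (requires all clients to be Luminous or newer).
ceph balancer mode upmap
ceph balancer on

# Raise the PG count on the data pool; "mypool" is a hypothetical name.
# On Nautilus and later, pgp_num follows pg_num automatically.
ceph osd pool set mypool pg_num 16384
```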

I had to set reweight 0.8 on the new OSDs to resolve near-full OSDs (only 1 TB was left free because of the skewed balance).
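The manual reweight mentioned above is usually done per OSD; a sketch (the OSD ID `42` is a placeholder):

```shell
# Override-reweight a single OSD to 0.8 so CRUSH places less data on it.
ceph osd reweight 42 0.8

# Check the resulting per-OSD utilization and weights.
ceph osd df tree
```

Note that `ceph osd reweight` is a temporary override on top of the CRUSH weight; the upmap balancer and a reweight override can work against each other.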

Now the cluster health is OK and the distribution is balanced thanks to my manual fix.

  data:
    pools:   4 pools, 16545 pgs
    objects: 173.32M objects, 82 TiB
    usage:   436 TiB used, 235 TiB / 671 TiB avail
    pgs:     15930 active+clean
             579   active+clean+snaptrim_wait
             36    active+clean+snaptrim

Before adding the 8 nodes, usage was 215 TiB. I did the upgrade about two months ago, and the used capacity still hasn't come back down.

Why did the usage double, and how can I fix it?
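For context, the usage numbers above can be cross-checked per pool and per OSD. A hedged sketch of the usual diagnostics (output depends entirely on the cluster, so none is shown):

```shell
# Compare stored vs. raw usage per pool; replication/EC overhead shows up here.
ceph df detail

# Per-OSD utilization, to spot lingering imbalance after the expansion.
ceph osd df tree

# The status shows snaptrim PGs; snapshot trimming backlog can hold deleted
# space until it completes. List PGs still in snaptrim states:
ceph pg dump pgs_brief | grep snaptrim | head
```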

 
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
