Changing the failure domain of an EC cluster still shows old profile

I made an EC 4+2 cluster with `crush-failure-domain=host`.
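
For context, the original profile and pool were set up roughly like this (the exact pool-creation arguments may have differed):

    ceph osd erasure-code-profile set my_ec_profile k=4 m=2 crush-failure-domain=host crush-device-class=hdd
    ceph osd pool create my_data_ec 128 128 erasure my_ec_profile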

Later, after adding more machines, I changed it from `host` to `datacenter`:

    ceph osd erasure-code-profile set my_ec_profile_datacenter k=4 m=2 crush-failure-domain=datacenter crush-device-class=hdd
    ceph osd crush rule create-erasure rule_my_data_ec_datacenter my_ec_profile_datacenter
    ceph osd pool set my_data_ec crush_rule rule_my_data_ec_datacenter

This seems to have worked, and `ceph osd pool get my_data_ec crush_rule` outputs:

    crush_rule: rule_my_data_ec_datacenter

But `ceph osd pool ls detail` still shows

    pool 3 'my_data_ec' erasure profile my_ec_profile ...

with `my_ec_profile` instead of `my_ec_profile_datacenter`.
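
For what it's worth, these are the commands I'd use to double-check what the pool references, though I'm not sure which of them reflects what actually governs placement:

    ceph osd crush rule dump rule_my_data_ec_datacenter
    ceph osd pool get my_data_ec erasure_code_profile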

Is this a problem?
Which one takes precedence, the profile or the CRUSH rule?

If it's not a problem, it is at least confusing; can I make the pool report the new profile somehow?

Thanks!
Niklas


