Re: How to remove failed OSD & reuse it?


 



I'm a bit confused about 'unmanaged' vs. 'managed'.
Earlier I had run (although I executed it after 'zapping' the disk):
-> $ ceph orch apply osd --all-available-devices
so ceph automatically re-deployed two "new" OSDs on that "failed" host.
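
As far as I understand (just my reading of the docs, so take this as a sketch rather than something I've verified), the same intent can also be declared with the service left 'unmanaged', either via a flag or a spec file; the service_id below is just a name I made up:

-> $ ceph orch apply osd --all-available-devices --unmanaged=true

# or, I think equivalently, as a spec applied with 'ceph orch apply -i osd-spec.yml':
service_type: osd
service_id: all-available-devices
placement:
  host_pattern: '*'
unmanaged: true
spec:
  data_devices:
    all: true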
Now everything is 'HEALTH_OK', yet I still see:
-> $ ceph orch ls | egrep osd
osd                           4  6m ago    -    <unmanaged>
osd.all-available-devices     2  3m ago    28m  *

How does one square these two outputs? What is actually happening here? Where, and how, does a 'managed' OSD service get configured, and how can I view or change that configuration?
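
(For what it's worth, I assume the spec behind these services can be shown with something like:
-> $ ceph orch ls osd --export
but I'm not sure whether that explains the split between 'osd' and 'osd.all-available-devices' above.)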

many thanks, L.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





