Re: OSD failed: still recovering

OK, so just an update: the recovery did finally complete, and I am pretty sure the "inconsistent" PGs were ones the failed OSD was part of.  Running 'ceph pg repair' sorted them out, along with the 600+ scrub errors I had.
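
For the archives, the flow was roughly the following; the PG ID below is just a placeholder, and 'ceph health detail' is where I'd expect the inconsistent PGs to show up:

    # list the PGs currently flagged inconsistent
    ceph health detail | grep inconsistent
    # ask the acting OSDs to repair one PG (placeholder ID)
    ceph pg repair 2.1f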

I was able to remove the OSD from the cluster, and am now just awaiting a replacement drive.  The cluster is showing as healthy again.
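
In case it's useful to anyone searching later, the removal was along these lines (osd.12 stands in for the actual ID):

    # mark the OSD out so it holds no data, then take it out of the cluster
    ceph osd out osd.12
    # stop the daemon; purge then removes it from the CRUSH map,
    # deletes its auth key, and removes the OSD entry in one step
    systemctl stop ceph-osd@12
    ceph osd purge 12 --yes-i-really-mean-it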

Related question: the OSD had its DB/WAL on a partition on an SSD.  Would I just "zap" the partition the way I would a whole drive, so it's available for reuse when I replace the HDD, or is there another method for reclaiming that DB/WAL partition?
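
My guess, and it is only a guess, is that something like this would do it, assuming ceph-volume treats a partition the same as a whole device (the device path is a placeholder):

    # wipe just the old DB/WAL partition, leaving the SSD's other partitions intact
    ceph-volume lvm zap /dev/sdc3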
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


