Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7


 



>> Migrating the DB volume using ceph-bluestore-tool was the wrong step.


What's the alternative? Redeploying an entire cluster of spinners doesn't seem very feasible.


>> It doesn't set up LV tags for the underlying volumes, which prevents proper OSD device detection after reboot.
>> 
>> One should set these tags manually using the lvchange --addtag command. The DB tags are largely similar to those on the block device, but some additional tuning is still required.
>> 
>> Unfortunately, AFAIK there is no complete how-to available. One of Eugene's links covers the topic only partly, so you should rather use an existing, valid OSD deployment as a reference.
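For anyone following along, the "use an existing valid OSD as a reference" approach might look roughly like the sketch below. This is not an official procedure: the VG/LV names and OSD number are placeholders, and the exact tag set should be copied from a working OSD on the same cluster rather than taken from this example.

```shell
# Inspect the tags ceph-volume set on a known-good OSD's volumes;
# these serve as the template for the migrated OSD.
lvs -o lv_name,lv_tags --noheadings

# Placeholder paths for the migrated OSD -- substitute your own.
DB_LV=/dev/ceph-db-vg/db-osd.7
BLOCK_LV=/dev/ceph-block-vg/block-osd.7

# Tag the new DB LV. Besides ceph.type, it needs the same
# cluster-wide tags (ceph.cluster_fsid, ceph.cluster_name,
# ceph.osd_id, ceph.osd_fsid, ...) as the block LV carries.
lvchange --addtag "ceph.type=db" "$DB_LV"
lvchange --addtag "ceph.db_device=$DB_LV" "$DB_LV"
lvchange --addtag "ceph.db_uuid=$(lvs --noheadings -o lv_uuid "$DB_LV" | tr -d ' ')" "$DB_LV"

# The block LV must also point at the new DB device. If it still
# carries tags referencing the old DB device, drop those first
# with lvchange --deltag.
lvchange --addtag "ceph.db_device=$DB_LV" "$BLOCK_LV"
lvchange --addtag "ceph.db_uuid=$(lvs --noheadings -o lv_uuid "$DB_LV" | tr -d ' ')" "$BLOCK_LV"

# Verify that ceph-volume now detects the OSD and its DB device:
ceph-volume lvm list
```

The point is the mechanism, not the exact tag list: diff `lvs -o lv_tags` between a healthy OSD and the migrated one, and add whatever is missing.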

Seems like justification for a tracker bug.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


