Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations

> IMHO this isn't the right layer for this.  An admin wishing to mirror the offload
> device should do so (via MD or (sigh) an HBA) and present that device in the OSD spec.
> ymmv.

Hello Anthony! The idea behind this is to keep the OSDs (with data on HDD) running even if a metadata device fails. Admittedly, my test lab with MD RAID1 built from two namespaces of the same NVMe device only illustrates the idea and makes no sense in the real world. In production I intend to use, for instance, nvme0n1 and nvme1n1 joined into a RAID1.
The problem is that ceph-volume tries to get a blkid, which obviously returns nothing for /dev/md127. That is why I intend to modify the code.
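
Roughly, the production layout I have in mind would look like the sketch below (device names nvme0n1/nvme1n1, md0, sda and the ceph-db VG/LV names are placeholders, not a tested recipe):

# Mirror the two NVMe devices with MD.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Carve per-OSD DB logical volumes out of the mirror so several HDD OSDs can share it.
pvcreate /dev/md0
vgcreate ceph-db /dev/md0
lvcreate -L 100G -n db-osd0 ceph-db

# Create the OSD with data on the HDD and RocksDB/WAL on the mirrored device.
ceph-volume lvm create --data /dev/sda --block.db ceph-db/db-osd0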

> Conventional wisdom has favored instead offloading fewer OSDs to each SSD to reduce write
> amp and the blast radius.

Indeed, but I use fast SSD devices for the WAL and DB.

BTW, do you have something against HBAs? You did sigh when mentioning them. :) Why? Because of the price?


