Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations

>Of course, but remember that Ceph stores data across multiple OSDs and hosts for just that
>reason. 

Yes, but when an NVMe carrying the DB+WAL for (just for instance) 5 OSDs dies, all 5 OSDs go down with it. Keeping in mind that each of those OSDs is a 16TB HDD, that kicks off a massive recovery data flow, which is exactly what I'd like to avoid. If one member of the md RAID1 dies, no worries: the other one should (in theory) stand in for the dead member in-flight.
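
Roughly how I intend to wire this up, as a minimal sketch only (two NVMes /dev/nvme0n1 and /dev/nvme1n1 assumed, made-up VG/LV names, and a placeholder 120G DB size; as far as I know a cephadm drive-group spec won't consume an md device directly, so this is the manual ceph-volume path):

  # Assumed device names and sizes -- adjust to your hardware.
  # Mirror the two NVMes with md:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

  # Put LVM on top so each OSD gets its own DB/WAL LV:
  pvcreate /dev/md0
  vgcreate ceph-db /dev/md0
  for i in 0 1 2 3 4; do
      lvcreate -L 120G -n db-$i ceph-db
  done

  # Create each OSD with its DB on the mirrored NVMe
  # (WAL co-locates on the DB device unless --block.wal is given):
  ceph-volume lvm create --data /dev/sda --block.db ceph-db/db-0
  # ...repeat for sdb..sde with db-1..db-4

Putting LVM on top of the md device means each OSD keeps its own DB LV while all of them inherit the mirror underneath.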

>You're still burning the SSD endurance twice as quickly.
Indeed, you are right! But in my opinion it is better to bear the additional cost of a couple of new NVMe drives than to make clients suffer.
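
For what it's worth, the wear on both mirror members can be watched with smartmontools; a minimal sketch, assuming the device names from above:

  # NAND wear on both md RAID1 members (NVMe SMART health log):
  for dev in /dev/nvme0n1 /dev/nvme1n1; do
      echo "== $dev =="
      smartctl -A "$dev" | grep -E 'Percentage Used|Data Units Written'
  done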

> The system would have cost less and been more reliable were it all-NVMe with no HBA.

In my case I need dense and cheap cold storage, and an all-flash cluster cannot beat that price :(