Re: ceph deployment best practice

Hi Janne,
Thanks for your advice.

So, you mean that with K=4, M=2 EC we need 8 OSD nodes to have better
protection? (See the sketch below the quoted reply.)

Thanks,
Gagan



On Tue, 22 Apr, 2025, 7:22 pm Janne Johansson, <icepic.dz@xxxxxxxxx> wrote:

> > So, I need to know what the data safety level will be with the above
> > set-up (i.e. 6 OSDs with 4+2 EC), and how many OSD (disk) and node
> > failures the above set-up can withstand.
>
> With EC N+2 you can lose one drive or host, and the cluster will keep
> running in degraded mode until it has been able to recreate the missing
> data on another OSD. If you lose two drives or hosts, I believe the EC
> pool will go read-only, again until it has rebuilt copies elsewhere.
>
> Still, if you have EC 4+2 and only 6 OSD hosts, this means that if a host
> dies, the cluster cannot recreate the data anywhere without violating the
> default "one copy per host" placement, so the cluster will stay degraded
> until that host comes back or another one replaces it. For an N+M EC
> cluster, I would suggest having N+M+1 or even N+M+2 hosts, so that you
> can do maintenance on a host, or lose one, and still be able to recover
> without visiting the server room.
>
> --
> May the most significant bit of your life be positive.
>
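For reference, here is a minimal Python sketch of the failure arithmetic in
Janne's reply above, assuming the usual EC pool default of min_size = k + 1
and one chunk per host (crush-failure-domain = host). The helper name
ec_pool_status and the returned strings are made up for illustration, not
taken from Ceph.

def ec_pool_status(k, m, hosts, failed_hosts, min_size=None):
    """Rough state of a k+m erasure-coded pool after losing `failed_hosts`
    hosts, with one chunk per host (crush-failure-domain = host)."""
    if min_size is None:
        min_size = k + 1               # assumed default min_size for EC pools
    shards = k + m                     # chunks written per object
    surviving = shards - failed_hosts  # each failed host takes one chunk with it
    healthy_hosts = hosts - failed_hosts

    if surviving < k:
        state = "data lost: fewer than k chunks survive"
    elif surviving < min_size:
        state = "inactive: below min_size, I/O blocked until chunks are rebuilt"
    else:
        state = "active but degraded"

    if surviving >= k and healthy_hosts >= shards:
        recovery = "can rebuild the missing chunks on the remaining hosts"
    elif surviving >= k:
        recovery = "cannot rebuild without putting two chunks on one host"
    else:
        recovery = "recovery not possible"
    return state + "; " + recovery

# 6 hosts, k=4 m=2: one dead host leaves no spare host to rebuild onto
print(ec_pool_status(k=4, m=2, hosts=6, failed_hosts=1))
# 7 hosts (k+m+1): the same failure can be healed without manual intervention
print(ec_pool_status(k=4, m=2, hosts=7, failed_hosts=1))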


