Re: Network traffic with failure domain datacenter

To be pedantic: backfill usually means copying data in toto, so, like normal write replication, it necessarily has to traverse the WAN.

Recovery of just a lost shard/replica can, in theory, stay local with the LRC plugin, but as noted that doesn’t seem like a good choice.  With the default EC plugin there *may* be some read-locality preference, but it’s not something I would bank on.
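For reference, that locality would come from an lrc profile roughly like the one below (a sketch only, untested here; profile and pool names are made up).  l=3 groups the 4+2 chunks into two sets of three, each with an extra local parity chunk, and crush-locality=datacenter keeps each set, and hence single-chunk recovery reads, inside one DC:

    # hypothetical profile/pool names; parameters per the lrc plugin docs
    ceph osd erasure-code-profile set lrc-dc plugin=lrc \
        k=4 m=2 l=3 \
        crush-locality=datacenter \
        crush-failure-domain=host
    ceph osd pool create lrcpool erasure lrc-dc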

Stretch clusters are great when you need zero RPO, really need a single cluster, and can manage client endpoint use accordingly.  They come with tradeoffs, though; in many cases two clusters with async replication are a better solution.  It depends on your needs and what you’re solving for.
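For what it’s worth, the 4+8 layout discussed below is usually pinned across two DCs with a hand-written CRUSH rule along these lines (a sketch, untested; rule name and id are illustrative), added to the decompiled CRUSH map and recompiled:

    rule ec-4-8-two-dc {
        id 10
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 2 type datacenter    # one branch per DC
        step chooseleaf indep 6 type host      # 6 of the 12 shards per DC
        step emit
    }

With six shards per DC and k=4, either site holds enough shards to reconstruct the data on its own, which is the usual motivation for 4+8 over two sites; as noted above, though, the default plugin makes no promise about preferring the local shards during recovery.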

> On May 7, 2025, at 5:06 AM, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
> 
> On Wed 7 May 2025 at 10:59, Torkil Svensgaard <torkil@xxxxxxxx> wrote:
>> We are looking at a cluster split between two DCs with the DCs as
>> failure domains.
>> 
>> Am I right in assuming that any recovery or backfill taking place should
>> largely happen inside each DC and not between them? Or can no such
>> assumptions be made?
>> Pools would be EC 4+8, if that matters.
> 
> Unless I am mistaken, the first/primary OSD of each PG is the one "doing"
> the backfills, so if the primaries are evenly distributed between the
> sites, the source of a given backfill would be in the remote DC in 50% of
> the cases.
> I do not think backfill is going to calculate how it can use
> only "local" pieces to rebuild a missing/degraded PG piece without
> going over the DC-DC link, even if that is theoretically possible.
> 
> -- 
> May the most significant bit of your life be positive.

It’s good to be 8-bit-clean; if you aren’t, then Kermit can compensate.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



