Update of MDS (non-cephadm cluster)

I have updated the Ceph software several times, but this is the first time I
have to do an update (from Quincy to Reef) that also involves CephFS.

I am not using cephadm.


I have 3 MDSs (2 active and one in standby)

I'm looking at point 5 of:

https://ceph.io/en/news/blog/2023/v18-2-0-reef-released/

If I have got it right, it says to temporarily set max_mds to 1 and then,
once only one MDS is active (I have sketched the commands I have in mind
right after this list):

1 - stop the MDS service on the 2 nodes whose MDS is now standby/inactive

2 - update the packages and restart the MDS on the active instance

3 - restart the MDS on the other 2 instances (after updating their software:
this is not written in the doc, but it seems obvious to me)

4 - set max_mds back to 2
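
In concrete terms, this is roughly what I am planning to run (assuming the
filesystem is simply called "cephfs", the MDS daemons are managed via the
usual systemd units of a package-based install, and it is the two standby
nodes that get stopped first):

    # reduce to a single active MDS and wait until only rank 0 is active
    ceph fs set cephfs max_mds 1
    ceph fs status cephfs        # repeat until only one rank shows "active"

    # on the two nodes whose MDS is now standby
    systemctl stop ceph-mds.target

    # on the node with the remaining active MDS:
    # upgrade the packages, then restart the daemon on the Reef binary
    systemctl restart ceph-mds.target

    # upgrade the packages on the other two nodes and start their MDS again
    systemctl start ceph-mds.target

    # finally restore the original number of active MDS daemons
    ceph fs set cephfs max_mds 2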

Did I understand correctly? But doesn't this mean there is a short period
(in step 2, during the restart) when the cluster is in an error state, with
possible problems for the CephFS clients?

Thanks, Massimo


