Re: Add previous OSDs to a new installation of a 3-node Proxmox CEPH cluster

Ok...
I am doing it again.
I have 2 OSDs per node.
Do I need to create a separate folder for each OSD?
Like
node1:
mon-store/ceph-osd0
mon-store/ceph-osd1
node2:
mon-store/ceph-osd2
mon-store/ceph-osd3
node3:
mon-store/ceph-osd4
mon-store/ceph-osd5

And then rsync everything to one node, let's say:
node1:/root/mon-store?

Which one should I use in order to restore or recreate the mon?
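
If I understood the quoted loop correctly, no per-OSD folder is needed at all: on each node the inner loop feeds every OSD it finds into the same store, and the outer rsync then merges each node's copy back into the single /root/mon-store on the node running the script, which would be the one to use for the rebuild. Just to confirm, a sketch of my reading (ceph-0 and ceph-1 below stand for whichever two OSDs live on a node; please correct me if this is wrong):

# what the inner loop effectively does on one node with two OSDs
for osd in /var/lib/ceph/osd/ceph-*; do        # e.g. ceph-0 and ceph-1 on node1
    ceph-objectstore-tool --data-path $osd --no-mon-config \
        --op update-mon-db --mon-store-path /root/mon-store.remote
done
# the script then rsyncs /root/mon-store.remote back into the merged
# /root/mon-store on the node where the script is running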

Sorry for so many questions.
I am trying to understand the whole process, so bear with me.

Thanks for your patience.



---


Gilberto Nunes Ferreira
+55 (47) 99676-7530 - Whatsapp / Telegram






On Wed, Aug 20, 2025 at 14:35, Eugen Block <eblock@xxxxxx> wrote:

> I feel like there's still a misunderstanding here.
>
> The mentioned procedure is:
>
> ms=/root/mon-store
> mkdir $ms
>
> # collect the cluster map from stopped OSDs
> for host in $hosts; do
>    rsync -avz $ms/. user@$host:$ms.remote
>    rm -rf $ms
>    ssh user@$host <<EOF
>      for osd in /var/lib/ceph/osd/ceph-*; do
>        ceph-objectstore-tool --data-path \$osd --no-mon-config \
>          --op update-mon-db --mon-store-path $ms.remote
>      done
> EOF
>    rsync -avz user@$host:$ms.remote/. $ms
> done
>
>
> It collects the clustermap on each host, querying each OSD, and then
> "merges" everything into one store, the local $ms store. That is then used
> to start up the first monitor. So however you do this, make sure you
> have all the clustermaps in one store. Did you stop the newly created
> mon first? And I don't care about the ceph-mon.target, that's always
> on to ensure the MON starts automatically after boot.
>
> Can you clarify that you really have all the clustermaps in one store?
> If not, you'll need to repeat the steps. In theory the steps should
> work exactly as they're described.
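
Just to check my understanding of how the merged $ms store is used to start up the first monitor: once the loop above has collected everything into $ms, the remaining steps would be roughly what I already ran, i.e. rebuild the store and drop it into the first mon's data dir, with that mon stopped. A sketch, using the keyring path and mon ids from my cluster; please correct me if this is not what you mean:

ms=/root/mon-store
# rebuild the mon store from the merged cluster maps, using the admin keyring
ceph-monstore-tool $ms rebuild -- \
    --keyring /etc/pve/priv/ceph.client.admin.keyring --mon-ids pve01 pve02 pve03

# with the newly created mon stopped, swap in the rebuilt store and fix ownership
systemctl stop ceph-mon@pve01
mv /var/lib/ceph/mon/ceph-pve01/store.db /var/lib/ceph/mon/ceph-pve01/store.db-bkp
cp -r $ms/store.db /var/lib/ceph/mon/ceph-pve01/
chown -R ceph:ceph /var/lib/ceph/mon/ceph-pve01/store.db
systemctl start ceph-mon@pve01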
>
> Quoting Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx>:
>
> > That's strange.
> > Now I have only the ceph-mon.target available:
> >
> > systemctl status ceph-mon.target
> > ● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
> >      Loaded: loaded (/usr/lib/systemd/system/ceph-mon.target; enabled; preset: enabled)
> >      Active: active since Wed 2025-08-20 14:07:12 -03; 1min 47s ago
> >  Invocation: 1fcbb21af715460294bd6d8549557ed9
> >
> > Notice: journal has been rotated since unit was started, output may be
> > incomplete.
> >
> >>>> And you did rebuild the store from all OSDs as I mentioned, correct?
> > Yes...
> > Like that:
> >
> > ceph-volume lvm activate --all
> > mkdir /root/mon-store
> > ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --no-mon-config \
> >   --op update-mon-db --mon-store-path mon-store/
> > ceph-monstore-tool mon-store/ rebuild -- \
> >   --keyring /etc/pve/priv/ceph.client.admin.keyring --mon-ids pve01 pve02 pve03
> > mv /var/lib/ceph/mon/ceph-pve01/store.db/ /var/lib/ceph/mon/ceph-pve01/store.db-bkp
> > cp -rf mon-store/store.db/ /var/lib/ceph/mon/ceph-pve01/
> > chown -R ceph:ceph /var/lib/ceph/mon/ceph-pve01/store.db
> >
> > On each node.
> > ---
> >
> >
> > Gilberto Nunes Ferreira
> > +55 (47) 99676-7530 - Whatsapp / Telegram
> >
> >
> >
> >
> >
> >
> > On Wed, Aug 20, 2025 at 13:49, Eugen Block <eblock@xxxxxx> wrote:
> >
> >> What does the monitor log? Does it at least start successfully? And
> >> you did rebuild the store from all OSDs as I mentioned, correct?
> >>
> >> Quoting Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx>:
> >>
> >> > Hi again...
> >> > I have reinstalled all Proxmox nodes and installed Ceph on each node.
> >> > I created the mons and mgr on each node.
> >> > I issued the command ceph-volume lvm activate --all on each node, in
> >> > order to bring up /var/lib/ceph/osd/<node>.
> >> > After that I ran these commands:
> >> > ceph-volume lvm activate --all
> >> > mkdir /root/mon-store
> >> > ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --no-mon-config \
> >> >   --op update-mon-db --mon-store-path mon-store/
> >> > ceph-monstore-tool mon-store/ rebuild -- \
> >> >   --keyring /etc/pve/priv/ceph.client.admin.keyring --mon-ids pve01 pve02 pve03
> >> > mv /var/lib/ceph/mon/ceph-pve01/store.db/ /var/lib/ceph/mon/ceph-pve01/store.db-bkp
> >> > cp -rf mon-store/store.db/ /var/lib/ceph/mon/ceph-pve01/
> >> > chown -R ceph:ceph /var/lib/ceph/mon/ceph-pve01/store.db
> >> >
> >> > But now I got nothing!
> >> > No monitor, no manager, no osd, none!
> >> >
> >> > Perhaps somebody can point out what I did wrong.
> >> >
> >> > Thanks
> >> >
> >> > On Wed, Aug 20, 2025 at 11:32, Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx> wrote:
> >> >
> >> >> I can see the content of the mentioned folders just after issuing the
> >> >> command ceph-volume....
> >> >> Thanks anyway.
> >> >>
> >> >>
> >> >>
> >> >> On Wed, Aug 20, 2025 at 11:26, Eugen Block <eblock@xxxxxx> wrote:
> >> >>
> >> >>> I assume you're right. Do you see the OSD contents in
> >> >>> /var/lib/ceph/osd/ceph-pve01 after activating?
> >> >>> And remember to collect the clustermap from all OSDs for this
> >> >>> procedure to succeed.
> >> >>>
> >> >>> Quoting Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx>:
> >> >>>
> >> >>> > I see...
> >> >>> >
> >> >>> > But I had another problem.
> >> >>> > The script from [0] indicates that a /var/lib/ceph/osd folder
> >> >>> > should exist, like:
> >> >>> > /var/lib/ceph/osd/ceph-pve01
> >> >>> > /var/lib/ceph/osd/ceph-pve02
> >> >>> > and so on.
> >> >>> >
> >> >>> > But this folder appears only if I run ceph-volume lvm activate --all.
> >> >>> > So my question is: when should I run this command, before or after
> >> >>> > using the script?
> >> >>> > I think I need to run ceph-volume lvm activate --all, right?
> >> >>> > Just to clarify.
> >> >>> >
> >> >>> > Thanks
> >> >>> >
> >> >>> > On Wed, Aug 20, 2025 at 11:08, Eugen Block <eblock@xxxxxx> wrote:
> >> >>> >
> >> >>> >> Yes, you need a monitor. The mgr is not required and can be deployed
> >> >>> >> later. After you've created the monitor, replace the mon store contents
> >> >>> >> with the collected clustermaps from the mentioned procedure. Keep the
> >> >>> >> ownerships of the directories/files in mind. If the monitor starts
> >> >>> >> successfully (with the original FSID), you can try to start one of the
> >> >>> >> OSDs. If that works, start the rest of them, wait for the peering
> >> >>> >> storm to settle, and create two more monitors and two mgr daemons.
> >> >>> >>
> >> >>> >> Note that if you lose the mon store and you had a CephFS, you'll need
> >> >>> >> to recreate that from the existing pools.
> >> >>> >>
> >> >>> >> Quoting Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx>:
> >> >>> >>
> >> >>> >> > Hi
> >> >>> >> >
> >> >>> >> > Do I need to create any mon and/or mgr in the new ceph cluster?
> >> >>> >> >
> >> >>> >> >
> >> >>> >> >
> >> >>> >> > On Mon, Aug 18, 2025 at 13:03, Eugen Block <eblock@xxxxxx> wrote:
> >> >>> >> >
> >> >>> >> >> Hi,
> >> >>> >> >>
> >> >>> >> >> this sounds like you created a new cluster (new fsid), while the OSDs
> >> >>> >> >> still have the previous fsid configured. I'd rather recommend following
> >> >>> >> >> this procedure [0] to restore the mon store utilizing the OSDs, rather
> >> >>> >> >> than trying to manipulate otherwise intact OSDs to fit into the "new"
> >> >>> >> >> cluster. That way you'll have "your" cluster back. I don't know if
> >> >>> >> >> there are any specifics to using Proxmox, though. But the mentioned
> >> >>> >> >> procedure seems to work just fine; I've read multiple reports on this
> >> >>> >> >> list. Luckily, I haven't had to use it myself.
> >> >>> >> >>
> >> >>> >> >> Regards,
> >> >>> >> >> Eugen
> >> >>> >> >>
> >> >>> >> >> [0]
> >> >>> >> >> https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
> >> >>> >> >>
> >> >>> >> >> Quoting Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx>:
> >> >>> >> >>
> >> >>> >> >> > Hi
> >> >>> >> >> >
> >> >>> >> >> > I have a 3-node Proxmox cluster with CEPH, and after a crash I
> >> >>> >> >> > had to reinstall Proxmox from scratch, along with Ceph.
> >> >>> >> >> > The OSDs are intact.
> >> >>> >> >> > I already did ceph-volume lvm activate --all; the OSDs appear in
> >> >>> >> >> > ceph-volume lvm list and I got a folder with the name of each OSD
> >> >>> >> >> > under /var/lib/ceph/osd.
> >> >>> >> >> > However, they do not appear in ceph osd tree or ceph -s or even
> >> >>> >> >> > in the web GUI.
> >> >>> >> >> > Is there any way to re-add these OSDs to Proxmox CEPH?
> >> >>> >> >> >
> >> >>> >> >> > Thanks a lot for any help.
> >> >>> >> >> >
> >> >>> >> >> >
> >> >>> >> >> > Best Regards
> >> >>> >> >> > ---
> >> >>> >> >> > Gilbert
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



