Re: Adding previous OSDs to a new installation of a 3-node Proxmox Ceph cluster


 



I can see the contents of the mentioned folders right after issuing the
ceph-volume... command.
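For reference, the activation step and a quick sanity check of the resulting directory might look like this (run as root on the OSD node; the OSD id 0 is just an example):

```shell
# Scan the LVM metadata and activate all OSDs found on this node;
# this recreates the tmpfs runtime directories under /var/lib/ceph/osd/
ceph-volume lvm activate --all

# Show what ceph-volume knows about the local OSDs
ceph-volume lvm list

# The activated OSD's runtime directory should now be populated
# (typical entries: block, fsid, keyring, type, whoami)
ls /var/lib/ceph/osd/ceph-0/
```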
Thanks anyway.



On Wed, Aug 20, 2025 at 11:26, Eugen Block <eblock@xxxxxx> wrote:

> I assume you're right. Do you see the OSD contents in
> /var/lib/ceph/osd/ceph-pve01 after activating?
> And remember to collect the cluster maps from all OSDs for this
> procedure to succeed.
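The collection step referred to here is sketched in the linked recovery-using-OSDs docs; a simplified single-host version could look roughly like the following (the scratch directory is an assumption, and all OSD daemons on the host must be stopped first; on a multi-node cluster you would repeat this per host, carrying the scratch directory along):

```shell
# Scratch directory that will accumulate the cluster maps (assumed path)
ms=/root/mon-store
mkdir -p "$ms"

# With the OSD daemons stopped, extract the cluster map from every
# local OSD and merge it into the scratch mon store
for osd in /var/lib/ceph/osd/ceph-*; do
  ceph-objectstore-tool --data-path "$osd" --no-mon-config \
    --op update-mon-db --mon-store-path "$ms"
done
```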
>
> Quoting Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx>:
>
> > I see...
> >
> > But I ran into another problem.
> > The script from [0] expects a /var/lib/ceph/osd folder to exist, e.g.:
> > /var/lib/ceph/osd/ceph-pve01
> > /var/lib/ceph/osd/ceph-pve02
> > and so on.
> >
> > But this folder only appears after I run ceph-volume lvm activate --all.
> > So my question is: when should I run this command, before or after
> > using the script?
> > I think I need to run ceph-volume lvm activate --all first, right?
> > Just to clarify.
> >
> > Thanks
> >
> > On Wed, Aug 20, 2025 at 11:08, Eugen Block <eblock@xxxxxx> wrote:
> >
> >> Yes, you need a monitor. The mgr is not required and can be deployed
> >> later. After you have created the monitor, replace the mon store
> >> contents with the cluster maps collected via the mentioned procedure.
> >> Keep the ownership of the directories/files in mind. If the monitor
> >> starts successfully (with the original FSID), you can try to start one
> >> of the OSDs. If that works, start the rest, wait for the peering storm
> >> to settle, then create two more monitors and two mgr daemons.
> >>
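Assuming the cluster maps were collected into /root/mon-store and the monitor is named pve01 (both placeholders, as is the Proxmox keyring path), the rebuild-and-restore step described above could look roughly like:

```shell
# Rebuild a mon store from the collected cluster maps; the keyring must
# contain the client.admin and mon. keys (path is an assumption based on
# where Proxmox keeps the admin keyring)
ceph-monstore-tool /root/mon-store rebuild -- \
  --keyring /etc/pve/priv/ceph.client.admin.keyring

# Back up the freshly created monitor's empty store, then drop in the
# rebuilt one
mv /var/lib/ceph/mon/ceph-pve01/store.db \
   /var/lib/ceph/mon/ceph-pve01/store.db.bak
cp -r /root/mon-store/store.db /var/lib/ceph/mon/ceph-pve01/

# Ownership matters, as noted above
chown -R ceph:ceph /var/lib/ceph/mon/ceph-pve01
systemctl start ceph-mon@pve01
```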
> >> Note that if you lose the mon store and you had a CephFS, you'll need
> >> to recreate that from the existing pools.
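For the CephFS case, recent Ceph releases document recreating the filesystem entry on top of the surviving pools roughly as follows (filesystem and pool names are placeholders; check the CephFS disaster-recovery docs for your release before running this):

```shell
# Recreate the FS entry over the existing pools without touching their
# data; --recover prevents MDS ranks from starting until you confirm
# the recovered state
ceph fs new cephfs cephfs_metadata cephfs_data --force --recover

# Once satisfied, allow MDS daemons to join the filesystem
ceph fs set cephfs joinable true
```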
> >>
> >> Quoting Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx>:
> >>
> >> > Hi
> >> >
> >> > Do I need to create any mon and/or mgr in the new ceph cluster?
> >> >
> >> >
> >> >
> >> > On Mon, Aug 18, 2025 at 13:03, Eugen Block <eblock@xxxxxx> wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> this sounds like you created a new cluster (new fsid) while the OSDs
> >> >> still have the previous fsid configured. I'd recommend following
> >> >> this procedure [0] to restore the mon store from the OSDs, rather
> >> >> than trying to manipulate otherwise intact OSDs to fit into the
> >> >> "new" cluster. That way you'll get "your" cluster back. I don't
> >> >> know whether there are any Proxmox-specific caveats, but the
> >> >> procedure seems to work just fine; I've read multiple reports of it
> >> >> on this list. Luckily, I haven't had to use it myself.
> >> >>
> >> >> Regards,
> >> >> Eugen
> >> >>
> >> >> [0]
> >> >> https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
> >> >>
> >> >> Quoting Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx>:
> >> >>
> >> >> > Hi
> >> >> >
> >> >> > I have a 3-node Proxmox cluster with Ceph, and after a crash I had
> >> >> > to reinstall Proxmox from scratch, along with Ceph.
> >> >> > The OSDs are intact.
> >> >> > I already ran ceph-volume lvm activate --all; the OSDs appear in
> >> >> > ceph-volume lvm list, and I got a folder with the name of each OSD
> >> >> > under /var/lib/ceph/osd.
> >> >> > However, they do not appear in ceph osd tree, ceph -s, or even in
> >> >> > the web GUI.
> >> >> > Is there any way to re-add these OSDs to Proxmox Ceph?
> >> >> >
> >> >> > Thanks a lot for any help.
> >> >> >
> >> >> >
> >> >> > Best Regards
> >> >> > ---
> >> >> > Gilbert
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



