Hi,
I'm glad it worked for you as well. :-)
Thanks,
Eugen
Quoting gagan tiwari <gagan.tiwari@xxxxxxxxxxxxxxxxxx>:
Hi Eugen,
Thanks for your help! Everything went well and the
cluster came back without any problems.
Thanks,
Gagan
On Thu, Aug 7, 2025 at 8:03 PM gagan tiwari <gagan.tiwari@xxxxxxxxxxxxxxxxxx> wrote:
Hi Eugen,
Thanks for clarifying! I will just set the noout
flag and power off all servers. Hopefully, it will work fine.
Thanks,
Gagan
On Thu, Aug 7, 2025 at 5:59 PM Eugen Block <eblock@xxxxxx> wrote:
No, that's not what I meant. Of course you need the noout flag; that's
the first point on the list I sent. Other flags are not required, as
pointed out by croit.
I have never disabled CephFS before a shutdown, as mentioned by
Gerard, so I can't comment on that.
The procedure I quoted from SUSE still works fine; that's how we
power off our (cephadm-based) clusters as well.
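
As a minimal sketch, the whole thing boils down to something like this on
the CLI (the shutdown/startup order itself is in the SUSE list quoted
further down):

ceph osd set noout        # before powering anything off
# ... power off the nodes, do the maintenance, power them back on ...
ceph -s                   # wait until the MONs have quorum and all OSDs are up
ceph osd unset noout
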
Quoting gagan tiwari <gagan.tiwari@xxxxxxxxxxxxxxxxxx>:
> Hi Eugen,
> I had deployed the cluster via cephadm, so it's
> container-based. Just FYI.
>
> So, you meant I don't need to run any commands like "ceph osd set noout",
> "ceph orch stop osd.<ID>", etc.
>
> I just need to power off all nodes, starting with the client nodes, then the
> OSD, MON and MDS nodes, and power them on after the maintenance activity is over.
>
> Thanks,
> Gagan
>
> On Wed, Aug 6, 2025 at 11:53 PM Eugen Block <eblock@xxxxxx> wrote:
>
>> You don't need to stop all the OSDs (or other daemons) manually; just
>> shut down the servers (most likely you have services colocated). When
>> they boot again, the Ceph daemons will also start automatically
>> (they're handled by systemd). I can check tomorrow exactly which steps
>> our shutdown procedure for planned power outages etc. consists of.
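>>
>> If you want to double-check after a reboot, something like this should do
>> (just a sketch; the exact unit names contain your cluster fsid):
>>
>> systemctl list-units 'ceph*'   # on each node: are all ceph units active?
>> ceph -s                        # on an admin node: quorum, all OSDs up?
>> ceph orch ps                   # cephadm's view of all daemons
>>
>> And if you ever did want to stop all OSDs with one command, "ceph orch
>> stop <service>" (see "ceph orch ls" for your OSD service name) stops a
>> whole service at once, but that's not needed here.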
>>
>> Quoting gagan tiwari <gagan.tiwari@xxxxxxxxxxxxxxxxxx>:
>>
>> > Hi Eugen,
>> > We have 80 OSDs in the cluster.
>> >
>> > So, to stop them I will need to run the command *ceph orch stop osd.<ID>*
>> > for all 80 OSDs one by one.
>> > Is there any way to stop all of them in one command ?
>> >
>> > Also, once all nodes are back after the power maintenance activity, will I
>> > need to start all the daemons (MON, MDS, OSD, etc.) manually, or will they
>> > come up automatically once the servers are up?
>> >
>> > Thanks,
>> > Gagan
>> >
>> > On Wed, Aug 6, 2025 at 4:45 PM Eugen Block <eblock@xxxxxx> wrote:
>> >
>> >> Although SUSE discontinued their product, the procedure is still correct
>> >> [0]:
>> >>
>> >> 1. Tell the Ceph cluster not to mark OSDs as out:
>> >>
>> >> ceph osd set noout
>> >>
>> >> 2. Stop daemons and nodes in the following order:
>> >>
>> >> Storage clients
>> >>
>> >> Gateways, for example NFS Ganesha or Object Gateway
>> >>
>> >> Metadata Server
>> >>
>> >> Ceph OSD
>> >>
>> >> Ceph Manager
>> >>
>> >> Ceph Monitor
>> >>
>> >> 3. If required, perform maintenance tasks.
>> >>
>> >> 4. Start the nodes and servers in the reverse order of the shutdown
>> >> process:
>> >>
>> >> Ceph Monitor
>> >>
>> >> Ceph Manager
>> >>
>> >> Ceph OSD
>> >>
>> >> Metadata Server
>> >>
>> >> Gateways, for example NFS Ganesha or Object Gateway
>> >>
>> >> Storage clients
>> >>
>> >> 5. Remove the noout flag:
>> >>
>> >> ceph osd unset noout
>> >>
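>> >> (Not part of the SUSE document, but before unsetting noout in step 5 I'd
>> >> wait until the cluster has settled again, e.g. check
>> >>
>> >> ceph -s
>> >> ceph health detail
>> >>
>> >> until all MONs, MGRs and OSDs are reported as up.)
>> >>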
>> >> [0]
>> >> https://documentation.suse.com/en-us/ses/7.1/html/ses-all/storage-salt-cluster.html#sec-salt-cluster-reboot
>> >>
>> >> Quoting gagan tiwari <gagan.tiwari@xxxxxxxxxxxxxxxxxx>:
>> >>
>> >> > Hi Guys,
>> >> > I have recently set up a production Ceph cluster which
>> >> > consists of 3 monitor nodes and 7 OSD nodes.
>> >> >
>> >> > There is a power maintenance activity scheduled at the data centre this
>> >> > coming weekend, and due to that I need to power off all the devices.
>> >> >
>> >> > Can you please advise me on the safest way to power off all servers?
>> >> >
>> >> > Should I power off all 7 OSD servers one by one, followed by all 3
>> >> > monitor nodes, or vice versa?
>> >> >
>> >> > Thanks,
>> >> > Gagan
>>
>>
>>
>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx