Re: Ceph upgrade OSD unsafe to stop

Hi,

'ceph osd ok-to-stop' is a safety check, nothing more. It basically checks whether PGs would become inactive if you stopped the given OSD(s), or whether they would only become degraded. Which OSD reports that it's unsafe to stop? Can you paste the output of 'ceph osd ok-to-stop <OSD_ID>'? Along with that, please also share 'ceph osd pool ls detail' so we can see which pool(s) are affected; 'ceph osd df tree' can be useful here as well.
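
For example (osd.2 below is just a placeholder ID, replace it with the OSD named in your upgrade message):

ceph orch upgrade status          # may show which daemon the upgrade is currently stuck on
ceph osd ok-to-stop 2             # 2 is a placeholder, use the reported OSD ID
ceph osd pool ls detail
ceph osd df tree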

Regards,
Eugen

Quoting "GLE, Vivien" <Vivien.GLE@xxxxxxxx>:

Hi,


I'm trying to upgrade my cluster (19.2.2 -> 19.2.3). The mon and mgr upgrades went well, but I ran into an issue with the OSDs:


Upgrade: unsafe to stop osd(s) at this time (165 PGs are or would become offline)


The cluster is in HEALTH_OK.

All pools are replica 3 and all PGs are active+clean.

The autoscaler is off, following the Ceph docs.


Does 'ceph osd ok-to-stop' lead to data loss?


The only rule used in the cluster is replicated_rule:


root@ceph-monitor-1:/# ceph osd crush rule dump replicated_rule

{
    "rule_id": 0,
    "rule_name": "replicated_rule",
    "type": 1,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "inist"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}

root@ceph-monitor-1:/# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME                             STATUS  REWEIGHT  PRI-AFF
 -1         6.63879  root inist
-15         0.90970      host ceph-monitor-1
  6    hdd  0.90970          osd.6                             up   1.00000  1.00000
-21         5.72910      datacenter bat1
-20         2.72910          room room01
-19         2.72910              row left
-18         2.72910                  rack 10
 -3         0.90970                      host ceph-node-1
  2    hdd  0.90970                          osd.2             up   1.00000  1.00000
 -5         0.90970                      host ceph-node-2
  1    hdd  0.90970                          osd.1             up   1.00000  1.00000
 -9         0.90970                      host ceph-node-3
  5    hdd  0.90970                          osd.5             up   1.00000  1.00000
-36         3.00000          room room03
-35         3.00000              row left06
-34         3.00000                  rack 08
 -7         1.00000                      host ceph-node-4
  0    hdd  1.00000                          osd.0             up   1.00000  1.00000
-13         1.00000                      host ceph-node-5
  3    hdd  1.00000                          osd.3             up   1.00000  1.00000
-11         1.00000                      host ceph-node-6
  4    hdd  1.00000                          osd.4             up   1.00000  1.00000
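
In case it helps, I could also dump the CRUSH map and test how this rule maps PGs across the tree, roughly like this (just a sketch, the file names are placeholders; rule_id 0 and --num-rep 3 match the rule dump and pool size above):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt          # decompile to plain text for reading
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings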

Thanks!

Vivien




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


