ceph 19.2.2 - adding new hard drives messed up the order of existing ones - OSD down

Hi,

I needed to add more spinning HDDs to my nodes (SuperMicro SSG-641E-E1CR36L)
and made the mistake of NOT setting osd_auto_discovery to "false", so
Ceph created OSDs on all 5 new spinning HDDs.

This was an issue because I want to configure the new OSDs the same way as the
existing ones (i.e. with WAL/DB on NVMe) once the other 37 drives arrive.
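For reference, this is roughly the kind of OSD service spec I was planning to
apply once the remaining drives are in. The service_id, file name and the
rotational filters are just placeholders for my setup; in practice I would
probably need to narrow db_devices down by model or size, since these hosts
also have SATA SSDs:

cat > osd-hdd-nvme-db.yaml <<'EOF'
service_type: osd
service_id: hdd-with-nvme-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1     # spinning HDDs as data devices
  db_devices:
    rotational: 0     # WAL/DB on non-rotational devices (would need narrowing here)
EOF

# preview what the orchestrator would deploy before applying for real
ceph orch apply -i osd-hdd-nvme-db.yaml --dry-run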

No big harm done though, because I can zap them and reconfigure (after running
ceph orch apply osd --all-available-devices --unmanaged=true) when I receive
the remaining 37 drives (I will be adding 6 drives to each server).
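For the record, these are the commands I used / intend to use to take back the
auto-created OSDs (the OSD id, host and device names below are placeholders):

# stop the orchestrator from grabbing every free device
ceph orch apply osd --all-available-devices --unmanaged=true

# remove an unwanted OSD and wipe its device so it can be redeployed later
ceph orch osd rm <OSD_ID> --zap
ceph orch osd rm status

# or wipe a leftover device on a host directly once the OSD is gone
ceph orch device zap <host> /dev/sdX --force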

The interesting part is that, for whatever reason, one of the existing SSD-based
OSDs is now down because the SSD it uses changed from /dev/sdp to /dev/sdu, and
as a result there is no "block" entry under /var/lib/ceph/FSID/osd.XX.
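This is what I have checked so far, mostly to confirm that the OSD's LVM volume
is still intact even though the /dev/sdX name moved (the osd id XX is a
placeholder, as above):

# on the affected host: list the LVM-backed OSDs ceph-volume knows about
cephadm ceph-volume lvm list

# from an admin node: see which physical device the OSD was last reported on
ceph osd metadata XX | grep -E 'devices|bdev'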

I am not sure why adding spinning disks messes up the order/naming of the
existing SSD devices.

I would appreciate some advice regarding the best course of action
to reconfigure  the OSD that is down
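My first thought was to simply let cephadm re-activate it, along the lines
below, but I am not sure whether that is safe or sufficient in this situation,
hence the question:

# restart the daemon so it re-reads the LVM tags and recreates the block symlink
ceph orch daemon restart osd.XX

# or, if a restart is not enough, redeploy the container for that OSD
ceph orch daemon redeploy osd.XX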

The cluster is healthy and not busy, with all the other OSDs working as expected.
It has 7 hosts with 12 SSDs and 6 HDDs each (one of those hosts is the one with
the issue), 2 EC 4+2 pools, 2 MDS, and a few metadata pools replicated on NVMe.
There are also 3 NVMe disks in each host dedicated to pools.


Many thanks
Steven
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


