Re: bdev_ioring -- true - Drives failing

Could you share some of the OSD log? There is usually some sort of assert or
error message when an OSD doesn't want to come back up.

I'm not too familiar with that config value, but I have seen a bug in that
version that causes OSDs to crash.
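
As a first pass, the journal for the failing units should contain the assert
or error (unit names and OSD ids below are taken from your systemctl output,
adjust as needed), and "ceph crash ls" may already have the backtrace:

# journalctl -u ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.4.service -n 200 --no-pager
# cephadm logs --name osd.4
# ceph crash ls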

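If it does turn out to be the io_uring path, the quickest check is probably to
flip the option back and restart the OSDs on that host. A rough sketch,
assuming the option was set cluster-wide with ceph config set:

# ceph config set osd bdev_ioring false
# ceph orch daemon restart osd.4    # likewise for osd.5, osd.6 and osd.7

If the OSDs stay up after that, it points at bdev_ioring rather than the
drives themselves.
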
On Thu, Jul 24, 2025 at 10:52 AM Devender Singh <devender@xxxxxxxxxx> wrote:

> Hello all,
>
> Using 19.2.2, when enabling bdev_ioring and rebooting the host, the OSDs
> never come back online, as shown below. Why?
>
> Also, the dashboard does not match the CLI.
>
> Host output in the dashboard is showing them as running. Tried failing the
> manager too, but it is the same.
>
>
>
> Dashboard showing them down:
>
>
>
> # ceph health detail
> HEALTH_WARN 4 osds down; 1 host (4 osds) down; Degraded data redundancy: 12998/38994 objects degraded (33.333%), 287 pgs degraded, 801 pgs undersized
> [WRN] OSD_DOWN: 4 osds down
>     osd.4 (root=default,host=host07n) is down
>     osd.5 (root=default,host=host07n) is down
>     osd.6 (root=default,host=host07n) is down
>     osd.7 (root=default,host=host07n) is down
> [WRN] OSD_HOST_DOWN: 1 host (4 osds) down
>     host host07n (root=default) (4 osds) is down
>
>
> # ceph orch ps | grep -v running    ----> not showing anything as down
> NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
>
> # systemctl list-units |grep -i osd
>   var-lib-ceph-osd-ceph\x2d4.mount                          loaded active     mounted       /var/lib/ceph/osd/ceph-4
>   var-lib-ceph-osd-ceph\x2d5.mount                          loaded active     mounted       /var/lib/ceph/osd/ceph-5
>   var-lib-ceph-osd-ceph\x2d6.mount                          loaded active     mounted       /var/lib/ceph/osd/ceph-6
>   var-lib-ceph-osd-ceph\x2d7.mount                          loaded active     mounted       /var/lib/ceph/osd/ceph-7
>   ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.4.service   loaded activating auto-restart  Ceph osd.4 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
>   ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.5.service   loaded activating auto-restart  Ceph osd.5 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
>   ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.6.service   loaded activating auto-restart  Ceph osd.6 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
>   ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.7.service   loaded activating auto-restart  Ceph osd.7 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
>   system-ceph\x2dosd.slice                                  loaded active     active        Slice /system/ceph-osd
>   ceph-osd.target                                           loaded active     active        ceph target allowing to start/stop all ceph-osd@.service instances at once
>
>
> Later they came up as running:
>
> # systemctl list-units |grep -i osd
>   var-lib-ceph-osd-ceph\x2d4.mount                          loaded active mounted   /var/lib/ceph/osd/ceph-4
>   var-lib-ceph-osd-ceph\x2d5.mount                          loaded active mounted   /var/lib/ceph/osd/ceph-5
>   var-lib-ceph-osd-ceph\x2d6.mount                          loaded active mounted   /var/lib/ceph/osd/ceph-6
>   var-lib-ceph-osd-ceph\x2d7.mount                          loaded active mounted   /var/lib/ceph/osd/ceph-7
>   ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.4.service   loaded active running   Ceph osd.4 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
>   ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.5.service   loaded active running   Ceph osd.5 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
>   ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.6.service   loaded active running   Ceph osd.6 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
>   ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.7.service   loaded active running   Ceph osd.7 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
>   system-ceph\x2dosd.slice                                  loaded active active    Slice /system/ceph-osd
>   ceph-osd.target                                           loaded active active    ceph target allowing to start/stop all ceph-osd@.service instances at once
>
> Then failed:
>
> # systemctl list-units |grep -i osd
>   var-lib-ceph-osd-ceph\x2d4.mount                          loaded active mounted   /var/lib/ceph/osd/ceph-4
>   var-lib-ceph-osd-ceph\x2d5.mount                          loaded active mounted   /var/lib/ceph/osd/ceph-5
>   var-lib-ceph-osd-ceph\x2d6.mount                          loaded active mounted   /var/lib/ceph/osd/ceph-6
>   var-lib-ceph-osd-ceph\x2d7.mount                          loaded active mounted   /var/lib/ceph/osd/ceph-7
> ● ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.4.service   loaded failed failed    Ceph osd.4 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
> ● ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.5.service   loaded failed failed    Ceph osd.5 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
> ● ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.6.service   loaded failed failed    Ceph osd.6 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
> ● ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.7.service   loaded failed failed    Ceph osd.7 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
>   system-ceph\x2dosd.slice                                  loaded active active    Slice /system/ceph-osd
>   ceph-osd.target                                           loaded active active    ceph target allowing to start/stop all ceph-osd@.service instances at once
>
> Regards
> Dev
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



