Re: squid 19.2.2 - drive in use but ceph is seeing it as "available"

Thanks
dmsetup remove solved the issue.
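
For the archives, the cleanup was roughly the following (a sketch; the exact mapping name is whatever lsblk/dmsetup report on the affected host):

# list leftover device-mapper entries still loaded in the kernel
dmsetup ls | grep ceph
# remove the stale mapping that was keeping /dev/sdaa busy
dmsetup remove ceph--dfaa321c--b04d--48e8--b78b--7c39436be0e5-osd--block--36473043--6d2c--4c7a--b61c--b057e83f0169
# then the zap from the original post can be retried
cephadm ceph-volume lvm zap --destroy /dev/sdaa
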
Steven

On Wed, 3 Sept 2025 at 10:47, GLE, Vivien <Vivien.GLE@xxxxxxxx> wrote:

> Hi,
>
>
> If you want to remove the leftover Ceph LV mapping, you can try:
>
>
> dmsetup remove ceph--dfaa321c--b04d--48e8--b78b--7c39436be0e5-osd--block--36473043--6d2c--4c7a--b61c--b057e83f0169
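>
> If you are not sure which mapping belongs to the disk, something like this can confirm it first (a generic sketch; <mapping-name> is a placeholder):
>
> # columnar listing of device-mapper entries the kernel still knows about
> dmsetup info -c
> # print the major:minor of the block device backing a given mapping
> dmsetup deps <mapping-name>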
> ------------------------------
> *From:* Steven Vacaroaia <stef97@xxxxxxxxx>
> *Sent:* Wednesday, 3 September 2025 15:11:48
> *To:* ceph-users
> *Subject:* squid 19.2.2 - drive in use but ceph is seeing it as "available"
>
> Hi,
>
> I would appreciate some guidance here
>
> I am trying to reconfigure OSDs to put their DB/WAL on NVMe,
> so I "out" and "ceph osd rm" them (because they were configured
> automatically and I had not "set-unmanaged" them properly), then "wipefs -af" them.
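>
> (In hindsight, the orchestrator-native sequence would probably have been
> something like the following, assuming the OSDs are managed by cephadm,
> with <osd-id> and <host> as placeholders:
>
> ceph orch osd rm <osd-id> --zap                  # drain the OSD, remove it and zap its devices
> ceph orch device zap <host> /dev/sdaa --force    # wipe a single device that is no longer in use
> )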
>
> Now Ceph sees the drives as available for deploying OSDs on,
> but doing so fails with "device or resource busy".
>
> Attempting to destroy and zap a drive also fails with "device or resource
> busy", even after running "wipefs -af /dev/sdaa".
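>
> To see what is still holding the disk, something like this may help
> (a sketch; it assumes a leftover device-mapper entry is still sitting on top of sdaa):
>
> ls /sys/block/sdaa/holders/                 # any dm-* entry here is a mapping still using the disk
> cat /sys/block/sdaa/holders/*/dm/name       # name of that mapping, e.g. the ceph--...--osd--block--... device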
>
> This is what is on it (lsblk output):
>
> sdaa                                                                                                     18.2T disk
> └─ceph--dfaa321c--b04d--48e8--b78b--7c39436be0e5-osd--block--36473043--6d2c--4c7a--b61c--b057e83f0169   18.2T lvm
>
> lvs/vgs do not find any logical volume for it
> cephadm ceph-volume inventory sees the disk as "available = True"
> cephadm ceph-volume lvm list is NOT finding any LVM on /dev/sdaa
> docker ps -a does not show any leftover OSD container
>
> There is a reference (a block symlink) to
> /dev/ceph--dfaa321c--b04d--48e8--b78b--7c39436be0e5-osd--block--36473043--6d2c--4c7a--b61c--b057e83f0169
> in /var/lib/ceph/FSID/removed/osd.NUMBER_DATE
>
> How can I 'clean up' the disk?
>
> Many thanks
> Steven
>
>  cephadm ceph-volume lvm zap --destroy /dev/sdaa
> Inferring fsid 0cfa836d-68b5-11f0-90bf-7cc2558e5ce8
> Not using image
> 'sha256:4892a7ef541bbfe6181ff8fd5c8e03957338f7dd73de94986a5f15e185dacd51'
> as it's not in list of non-dangling images with ceph=True label
> Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host
> --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint
> /usr/sbin/ceph-volume --privileged --group-add=disk --init -e
> CONTAINER_IMAGE=quay.io/ceph/ceph:v19 -e NODE_NAME=ceph-host-1 -e
> CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v
> /var/run/ceph/0cfa836d-68b5-11f0-90bf-7cc2558e5ce8:/var/run/ceph:z -v
> /var/log/ceph/0cfa836d-68b5-11f0-90bf-7cc2558e5ce8:/var/log/ceph:z -v
>
> /var/lib/ceph/0cfa836d-68b5-11f0-90bf-7cc2558e5ce8/crash:/var/lib/ceph/crash:z
> -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v
> /run/lock/lvm:/run/lock/lvm -v /:/rootfs:rslave -v
> /tmp/ceph-tmp419ql4q7:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v19 lvm zap
> --destroy /dev/sdaa
> /usr/bin/docker: stderr --> Zapping: /dev/sdaa
> /usr/bin/docker: stderr --> Removing all BlueStore signature on /dev/sdaa
> if any...
> /usr/bin/docker: stderr Running command: /usr/bin/ceph-bluestore-tool
> zap-device --dev /dev/sdaa --yes-i-really-really-mean-it
> /usr/bin/docker: stderr  stderr: error from zap: (16) Device or resource
> busy
> /usr/bin/docker: stderr 2025-09-02T17:41:02.385+0000 72def157f9c0 -1
> bdev(0x57b37fe52000 /dev/sdaa) open open got: (16) Device or resource busy
>
>
>
> Some extra info
>
>
> cephadm ceph-volume inventory output
>
>
> Device Path               Size         Device nodes    rotates  available  Model name
> /dev/nvme3n1              1.46 TB      nvme3n1         False    True       MTFDKCC1T6TGQ-1BK1DABYY
> /dev/sdaa                 18.19 TB     sdaa            True     True       ST20000NM007D-3D
> /dev/sdv                  18.19 TB     sdv             True     True       ST20000NM007D-3D
> /dev/sdw                  18.19 TB     sdw             True     True       ST20000NM007D-3D
> /dev/sdx                  18.19 TB     sdx             True     True       ST20000NM007D-3D
> /dev/sdy                  18.19 TB     sdy             True     True       ST20000NM007D-3D
> /dev/sdz                  18.19 TB     sdz             True     True       ST20000NM007D-3D
>
>
> lsblk output
> NAME                                                                                                      SIZE TYPE MOUNTPOINT
> sdaa                                                                                                     18.2T disk
> └─ceph--dfaa321c--b04d--48e8--b78b--7c39436be0e5-osd--block--36473043--6d2c--4c7a--b61c--b057e83f0169   18.2T lvm
>
> lvs output (grepped for the LV name shown by lsblk on /dev/sdaa and couldn't find it)
>   osd-block-29a5b1f5-d442-4756-a61d-e230832a3577   ceph-013dc6ae-90c7-4271-b308-895bbafe8840 -wi-ao----  <6.99t
>   osd-block-5f7471f4-22ee-4cf9-9a16-2854a0c26496   ceph-27dfe22b-8316-40e3-aeee-1686bf0a95b7 -wi-ao----  <6.99t
>   osd-block-ff6ca83d-80c5-4fff-a50d-399d94ba1b1b   ceph-2de80ef9-8095-44be-9df2-91f458b1f1ea -wi-ao----  18.19t
>   osd-block-fc022b7e-3d85-4435-998e-6513020bbdf3   ceph-57f7e6bf-cf30-4d81-8c97-d0cb503cb8f8 -wi-ao----  13.97t
>   osd-block-347afb43-812c-4330-a795-060e76f33f89   ceph-5815ca32-056b-476a-8a16-64c06b9c3fe9 -wi-ao----  <6.99t
>   osd-block-c96901b7-8b7e-4a94-be0a-ab1316fd114e   ceph-58a9cfda-5090-47f5-bfe8-b993d6133218 -wi-ao----  <6.99t
>   osd-block-5e069648-e0ec-4490-a3a9-e758d6e3e1d4   ceph-59b5ad1b-a35b-4c46-b737-7328eee6ec45 -wi-ao----  13.97t
>   osd-block-5d7dc85e-dcc8-4d87-b7ba-d61201dccd5e   ceph-6aedba8c-3c5a-476d-9e0c-9fd159c8c09e -wi-ao----  <6.99t
>   osd-block-d18a4abc-f723-4ac8-9d05-d02edd7e6018   ceph-6f7e544b-9ead-40c4-82ce-646272461b36 -wi-ao----  <6.99t
>   osd-block-33404537-6f45-4f10-af8a-8de491e8bcc9   ceph-7060eb95-cfb9-441b-9094-1803348ca3f2 -wi-ao----  18.19t
>   osd-block-153d1ed5-007c-439e-ad9e-c20d1880ff4d   ceph-9729cc29-705c-42e5-bcc6-5d7cfd62a4ac -wi-ao----  18.19t
>   osd-block-15f14591-43c9-42ed-8410-d7f216019b55   ceph-9d628846-1c8f-4c73-9c7f-6e34c69210a3 -wi-ao----  18.19t
>   osd-block-44c310eb-2002-4817-8b2e-beacb0a70806   ceph-ab1ca6f2-30c8-45a7-bfec-4e09089ec856 -wi-ao----  <6.99t
>   osd-block-b55d7d9f-8e2a-4da5-b9c5-ace96975286d   ceph-ba4098bf-6808-4a65-93a6-e84f3ff7936a -wi-ao----  13.97t
>   osd-block-a97d2620-d07a-4bd7-8f19-66801f5d613b   ceph-c93d909a-b944-4dc5-a9b2-ed3441226c46 -wi-ao----  18.19t
>   osd-block-5c134f31-3a17-4fe9-989f-fb3a1dc2c0e9   ceph-c9e00bac-501b-4449-b23c-d914c61a6838 -wi-ao----  <6.99t
>   osd-block-89390a0d-6bc3-41a5-a79b-ebac922d540f   ceph-d8a23938-be11-4cee-ba13-951996ca9328 -wi-ao----  <6.99t
>   osd-block-cfabfe08-5e42-46ba-8412-1d264e9312d3   ceph-dcff97c0-5caa-4cd2-87f5-baa552b0b172 -wi-ao----  <6.99t
>   osd-block-2e86866b-b537-4b29-86e7-490e627f82be   ceph-de7ee41e-a121-4ecd-a838-cf284e811b7e -wi-ao----  18.19t
>   osd-block-ab0b170e-91e0-41d4-a5ea-b5c4fef6c3d8   ceph-e6d22c3b-bfb6-494c-b09d-15a667816442 -wi-ao----  <6.99t
>   osd-db-012f9187-1596-4db8-96ba-809a96bce8fd      ceph-f0e77cf5-9878-4a9b-bdc4-22211aadb951 -wi-ao---- 248.39g
>   osd-db-124f39e2-f28d-47a8-acce-fb1364f7a638      ceph-f0e77cf5-9878-4a9b-bdc4-22211aadb951 -wi-ao---- 248.39g
>   osd-db-17edf308-b36b-42be-8855-55833d35f983      ceph-f0e77cf5-9878-4a9b-bdc4-22211aadb951 -wi-ao---- 248.39g
>   osd-db-55eb88b3-89a0-4931-a1be-d73e2aaf527f      ceph-f0e77cf5-9878-4a9b-bdc4-22211aadb951 -wi-ao---- 248.39g
>   osd-db-5e03ba4b-5641-49c6-a477-1b37dffcd398      ceph-f0e77cf5-9878-4a9b-bdc4-22211aadb951 -wi-ao---- 248.39g
>   osd-db-b22a1142-8dec-4035-96b1-7717c4587bcb      ceph-f0e77cf5-9878-4a9b-bdc4-22211aadb951 -wi-ao---- 248.39g
>   osd-block-f9c81b89-a903-4a11-9c2a-9171b4a5760a   ceph-f8276850-dfb2-4689-a5b1-e54925605dc4 -wi-ao----  <6.99t
>
>
> vgs output (grepped for the VG from the lsblk name on /dev/sdaa and couldn't find it)
>
> VG                                        #PV #LV #SN Attr   VSize  VFree
>   ceph-013dc6ae-90c7-4271-b308-895bbafe8840   1   1   0 wz--n- <6.99t    0
>   ceph-27dfe22b-8316-40e3-aeee-1686bf0a95b7   1   1   0 wz--n- <6.99t    0
>   ceph-2de80ef9-8095-44be-9df2-91f458b1f1ea   1   1   0 wz--n- 18.19t    0
>   ceph-57f7e6bf-cf30-4d81-8c97-d0cb503cb8f8   1   1   0 wz--n- 13.97t    0
>   ceph-5815ca32-056b-476a-8a16-64c06b9c3fe9   1   1   0 wz--n- <6.99t    0
>   ceph-58a9cfda-5090-47f5-bfe8-b993d6133218   1   1   0 wz--n- <6.99t    0
>   ceph-59b5ad1b-a35b-4c46-b737-7328eee6ec45   1   1   0 wz--n- 13.97t    0
>   ceph-6aedba8c-3c5a-476d-9e0c-9fd159c8c09e   1   1   0 wz--n- <6.99t    0
>   ceph-6f7e544b-9ead-40c4-82ce-646272461b36   1   1   0 wz--n- <6.99t    0
>   ceph-7060eb95-cfb9-441b-9094-1803348ca3f2   1   1   0 wz--n- 18.19t    0
>   ceph-9729cc29-705c-42e5-bcc6-5d7cfd62a4ac   1   1   0 wz--n- 18.19t    0
>   ceph-9d628846-1c8f-4c73-9c7f-6e34c69210a3   1   1   0 wz--n- 18.19t    0
>   ceph-ab1ca6f2-30c8-45a7-bfec-4e09089ec856   1   1   0 wz--n- <6.99t    0
>   ceph-ba4098bf-6808-4a65-93a6-e84f3ff7936a   1   1   0 wz--n- 13.97t    0
>   ceph-c93d909a-b944-4dc5-a9b2-ed3441226c46   1   1   0 wz--n- 18.19t    0
>   ceph-c9e00bac-501b-4449-b23c-d914c61a6838   1   1   0 wz--n- <6.99t    0
>   ceph-d8a23938-be11-4cee-ba13-951996ca9328   1   1   0 wz--n- <6.99t    0
>   ceph-dcff97c0-5caa-4cd2-87f5-baa552b0b172   1   1   0 wz--n- <6.99t    0
>   ceph-de7ee41e-a121-4ecd-a838-cf284e811b7e   1   1   0 wz--n- 18.19t    0
>   ceph-e6d22c3b-bfb6-494c-b09d-15a667816442   1   1   0 wz--n- <6.99t    0
>   ceph-f0e77cf5-9878-4a9b-bdc4-22211aadb951   1   6   0 wz--n- <1.46t 8.00m
>   ceph-f8276850-dfb2-4689-a5b1-e54925605dc4   1   1   0 wz--n- <6.99t    0
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



