Re: Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected on all newly added osd disks


 



Can you clarify a bit more? Are you surprised that OSDs have already been deployed although you just added the new (blank) disks? In that case you might already have an OSD service in place which automatically deploys OSDs as soon as available devices are added. To confirm that, please share the output of:

ceph orch ls osd --export
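
If that export shows a spec that targets all available devices and does not have unmanaged set to true, cephadm will keep creating OSDs on every blank disk it sees. Such a spec typically looks roughly like this (the service_id and placement here are only an example, yours will differ):

service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
  filter_logic: AND
  objectstore: bluestore

If that is the case and you don't want the automatic deployment, you can switch the service to unmanaged with:

ceph orch apply osd --all-available-devices --unmanaged=true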

Quoting gagan tiwari <gagan.tiwari@xxxxxxxxxxxxxxxxxx>:

No. They are new disks. Not used before. I am setting up a new cluster.

I ran this command to zap them: "ceph orch device zap od-node1 /dev/sda --force"

Here is the lsblk output:

lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda        8:0    0   14T  0 disk
└─ceph--54ce8bd1--384b--4e94--a9ed--b231ff649bd2-osd--block--ace35922--b31d--49db--ad86--839acca0990c   253:0    0   14T  0 lvm
sdb        8:16   0   14T  0 disk
└─ceph--ce627d0c--5a73--4df4--9690--752fa20d614e-osd--block--bbe3ce9e--ae82--453b--bca8--191f2300f780   253:1    0   14T  0 lvm
sdc        8:32   0   14T  0 disk
└─ceph--543e9cff--9447--4a3d--a6e8--2c3dd7e1810b-osd--block--a013c7f6--aa1b--4c96--be71--9fa3bf2a910c   253:2    0   14T  0 lvm
sdd        8:48   0   14T  0 disk
└─ceph--d3e34042--0a3d--476d--acdb--24f48d7c59e1-osd--block--c77a03ea--9280--46b4--8ff4--2e11ad25eddc   253:4    0   14T  0 lvm
sde        8:64   0   14T  0 disk
└─ceph--d3e7308a--09a4--4e91--8d12--8f3b1cc94c6f-osd--block--1f83e5df--4e6c--4613--b8dd--5fea7fdcec4d   253:3    0   14T  0 lvm
sdf        8:80   0   14T  0 disk
└─ceph--6a86dae1--f879--414a--8115--03fbef2f36e4-osd--block--debb0534--a12d--48e1--8d4c--d0c07e44ae6c   253:5    0   14T  0 lvm
sdg        8:96   0   14T  0 disk
└─ceph--f52542ec--0aa7--4007--8d42--f81723e946e3-osd--block--07837bc6--d868--48b5--b950--9cef1ce54eb4   253:6    0   14T  0 lvm
sdh        8:112  0   14T  0 disk



Here is the fdisk output:

 fdisk -l /dev/sda
Disk /dev/sda: 13.97 TiB, 15360950534144 bytes, 30001856512 sectors
Disk model: WUS5EA1A1ESP5E3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


fdisk -l /dev/sdb
Disk /dev/sdb: 13.97 TiB, 15360950534144 bytes, 30001856512 sectors
Disk model: WUS5EA1A1ESP5E3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
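
By the way, do I also need to clear the leftover ceph LVM volumes from these disks by hand before re-adding them, or should the orch zap take care of that? I was thinking of something along these lines per disk (untested, using /dev/sda as an example):

cephadm ceph-volume lvm zap --destroy /dev/sda
# or, if LVM signatures still remain after that:
wipefs -a /dev/sda
sgdisk --zap-all /dev/sda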

Thanks,
Gagan

On Thu, Jul 31, 2025 at 9:21 PM Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:

How did you zap the new drives?  Had they been used before?

Does `lsblk` show anything on the drives?  What about fdisk?

# fdisk -l /dev/sda
Disk /dev/sda: 744.63 GiB, 799535005696 bytes, 1561591808 sectors
Disk model: PERC H730 Mini
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A00D586C-45F0-49BE-8290-A2C8B872145F

Device     Start        End    Sectors   Size Type
/dev/sda1   2048       4095       2048     1M BIOS boot
#
# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 744.6G  0 disk
├─sda1   8:1    0     1M  0 part
└─sda2   8:2    0 744.6G  0 part /
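
If the drives still show ceph LVs after zapping, it is also worth seeing what LVM itself reports on that host; something along these lines should list any leftover ceph volume groups and logical volumes:

pvs
lvs
cephadm ceph-volume lvm list

Leftover ceph VGs are exactly what produces the "LVM detected" and "Insufficient space (<10 extents) on vgs" rejection reasons.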



> On Jul 31, 2025, at 11:48 AM, gagan tiwari <gagan.tiwari@xxxxxxxxxxxxxxxxxx> wrote:
>
> Hi Guys,
>                   I am setting up a new Ceph cluster. After adding the OSD
> devices, when I run *ceph orch device ls*
>
> it shows *Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM
> detected* for all OSD devices.
>
> All the OSD devices are new NVMe disks.
>
> I tried to rm, zap, and destroy the disks and re-added them, but I get the
> same messages again for all disks.
>
> Please let me know how to fix this.
>
> Thanks,
> Gagan

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





