How did you zap the new drives? Had they been used before? Does `lsblk` show anything on the drives? What about fdisk?

# fdisk -l /dev/sda
Disk /dev/sda: 744.63 GiB, 799535005696 bytes, 1561591808 sectors
Disk model: PERC H730 Mini
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A00D586C-45F0-49BE-8290-A2C8B872145F

Device     Start   End Sectors Size Type
/dev/sda1   2048  4095    2048   1M BIOS boot

# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 744.6G  0 disk
├─sda1   8:1    0     1M  0 part
└─sda2   8:2    0 744.6G  0 part /

> On Jul 31, 2025, at 11:48 AM, gagan tiwari <gagan.tiwari@xxxxxxxxxxxxxxxxxx> wrote:
>
> Hi Guys,
>            I am setting up a new Ceph cluster. After adding the OSD
> devices, I ran "ceph orch device ls", and it showed
>
>     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
>
> for all OSD devices.
>
> All the OSD devices are new NVMe disks.
>
> I tried to rm, zap, and destroy the disks and re-added them, but I got
> the same messages for all of them.
>
> Please let me know how to fix this.
>
> Thanks,
> Gagan
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
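That combination of warnings usually means old filesystem and LVM signatures are still on the drives, so ceph-volume refuses to treat them as available. A minimal sketch of the kind of wipe I'd try is below; to keep it safe to run as-is it operates on a throwaway file-backed image rather than a real drive, and `DEV` is a stand-in for your actual device path (e.g. /dev/nvme0n1). Triple-check the path before running this against a real disk; `wipefs -a` is destructive.

```shell
#!/bin/sh
# Sketch only: demonstrated on a file-backed image so it is harmless.
# On a real OSD node you would set DEV to the drive itself, e.g.
# DEV=/dev/nvme0n1 -- and skip the mktemp/truncate/mkswap setup lines.
set -e

DEV=$(mktemp)            # stand-in for the real block device
truncate -s 64M "$DEV"

# Simulate leftover metadata from the disk's previous life.
mkswap "$DEV" >/dev/null 2>&1

wipefs "$DEV"            # lists the signature ceph-volume trips over

# Erase every signature wipefs knows about (filesystem, swap,
# LVM PV, RAID, GPT headers).
wipefs -a "$DEV" >/dev/null

wipefs "$DEV"            # prints nothing: the device now looks pristine

rm -f "$DEV"
```

If LVM volumes from a previous OSD are still active on the drive, you may also need to tear those down first (vgremove/pvremove, or `ceph-volume lvm zap <device> --destroy`) before the wipe sticks; with cephadm, `ceph orch device zap <host> <path> --force` is meant to do the whole cleanup server-side.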