Re: DriveGroup Spec question

Hi,

On 8/7/25 at 16:41, Anthony D'Atri wrote:
> Which Ceph release?

19.2.3


> block_db_size and db_slots are meant to be mutually exclusive.  The former specifies the size of a slice; the latter says to divide up the device into N even slices.
> That said, it's not entirely clear that db_slots always works as expected.  I would leave it out.

db_slots has absolutely no effect on the outcome.

> What manner of SSD is 605GB?

This will be a namespace on a larger NVMe where the rest would be used as an OSD. Currently I am testing this on VMs, which is why we can allocate exactly 6 × 100 GB to that block device.
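
For reference, the kind of drive group spec I am experimenting with looks roughly like this (service_id, host_pattern and the exact size bounds are illustrative, not the literal values from our cluster):

service_type: osd
service_id: hdd-with-nvme-db        # illustrative name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1                   # the HDDs
  db_devices:
    size: '550G:650G'               # range matching the ~605GB NVMe namespace
  block_db_size: '100G'             # one 100GB RocksDB+WAL slice per OSD
  # db_slots: 6                     # also tried, but as said above it has no effect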

>> Initially this creates 6 OSDs with their RocksDB+WAL on the SSDs,
>> 3 on each, which is nice for load balancing.

>> But when we add another HDD it gets a 17.9TB data volume and a 100GB DB

> Is it the same HDD SKU as the existing OSDs?  You could be suffering from this new HDD being ever so slightly smaller than the existing HDDs, and from base-2 units (TiB) vs base-10 (TB), which is likely the same reason you have a size range for the DB device spec.

I do not think that this is the issue here, but I'll try.
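
(Just to put numbers on the base-2 vs base-10 point: a drive sold as, e.g., 18 TB holds 18 × 10^12 bytes, which is only about 16.4 TiB, so a size filter written in one unit can easily sit just above or below a device whose size is reported in the other.)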


Regards
--
Robert Sander
Linux Consultant

Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: +49 30 405051 - 0
Fax: +49 30 405051 - 19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Managing Director: Peer Heinlein - Registered office: Berlin