Re: squid 19.2.2 cephadm - adding more placement hosts using osd spec yml file

Sorry, I meant to say "The service is now deployed on all hosts" :-)

On Tue, 2 Sept 2025 at 12:26, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:

> Hi,
>
> Thanks
>
> The service is no deployed on all host
>
> This community's help is AMAZING !!!
>
> Steven
>
> On Tue, 2 Sept 2025 at 12:20, Eugen Block <eblock@xxxxxx> wrote:
>
>> If the OSDs already exist, there's nothing the orchestrator would do;
>> that's why you don't see anything in the dry-run output. You can apply
>> your change, and when one of the hosts gets a new disk or you wipe one,
>> the spec will pick it up.
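>>
>> Roughly like this (a sketch; the host name and device path below are just
>> placeholders, adjust them to your cluster):
>>
>>   # re-apply the spec so it is stored with the wider placement
>>   ceph orch apply -i osd-ssd-config.yml
>>
>>   # wipe an unused disk on one of the other hosts; the ssd_osds spec
>>   # should then pick it up and deploy an OSD there
>>   ceph orch device zap ceph-host-2 /dev/sdX --force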
>>
>>
>> Zitat von Steven Vacaroaia <stef97@xxxxxxxxx>:
>>
>> > Hi,
>> >
>> > I am using cephadm and deployed all my OSDs using a spec file
>> >
>> > I just noticed that ssd_osds has only one placement host instead of all 7.
>> >
>> > How do I add the other placement hosts ?
>> >
>> > Re-running the spec file (which has host_pattern: "*") with --dry-run does
>> > not indicate that it will do anything.
>> >
>> >  Many thanks
>> > Steven
>> >
>> > This is the spec file
>> >
>> > # SSD OSDs
>> > service_type: osd
>> > service_id: ssd_osds
>> > placement:
>> >   host_pattern: "*"
>> > crush_device_class: ssd_class
>> > spec:
>> >   data_devices:
>> >     rotational: 0
>> >     size: '6T:7T'
>> >
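>> > For completeness, this is how I (re-)apply it (assuming the cephadm shell
>> > and the osd-ssd-config.yml file name used further down):
>> >
>> >   ceph orch apply -i osd-ssd-config.yml --dry-run   # preview only
>> >   ceph orch apply -i osd-ssd-config.yml             # apply for real
>> >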
>> > The relevant part of ceph orch ls
>> > osd.all-available-devices     4  8m ago  16h  *
>> > osd.hdd_osds                 74  8m ago  16h  *
>> > osd.nvme_osds                25  8m ago  5w   *
>> > osd.ssd_osds                 84  8m ago  3w   ceph-host-1
>> >
>> > The exported part of osd.ssd_osds
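>> > (pulled with, if I remember the exact invocation correctly:
>> >
>> >   ceph orch ls osd --export
>> >
>> > which dumps the stored OSD service specs)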
>> > ---
>> > service_type: osd
>> > service_id: ssd_osds
>> > service_name: osd.ssd_osds
>> > placement:
>> >   hosts:
>> >   - ceph-host-1
>> > spec:
>> >   crush_device_class: ssd_class
>> >   data_devices:
>> >     rotational: 0
>> >     size: 6T:7T
>> >   filter_logic: AND
>> >   objectstore: bluestore
>> >
>> > The result of applying osd-ssd-config.yml (with --dry-run):
>> >
>> > WARNING! Dry-Runs are snapshots of a certain point in time and are bound
>> > to the current inventory setup. If any of these conditions change, the
>> > preview will be invalid. Please make sure to have a minimal
>> > timeframe between planning and applying the specs.
>> > ####################
>> > SERVICESPEC PREVIEWS
>> > ####################
>> > +---------+------+--------+-------------+
>> > |SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
>> > +---------+------+--------+-------------+
>> > +---------+------+--------+-------------+
>> > ################
>> > OSDSPEC PREVIEWS
>> > ################
>> > +---------+------+------+------+----+-----+
>> > |SERVICE  |NAME  |HOST  |DATA  |DB  |WAL  |
>> > +---------+------+------+------+----+-----+
>> > +---------+------+------+------+----+-----+
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



