I haven’t attempted the remaining upgrade just yet; I wanted to check on this before proceeding. Things seem “stable” in the sense that I’m running VMs and all volumes and images are still functioning. I’m using whatever would have been the default from 16.2.14.

The failure seems to happen intermittently: I receive Nagios alerts, which eventually clear and then reappear.

HEALTH_WARN Failed to apply 1 service(s): osd.cost_capacity
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.cost_capacity
    osd.cost_capacity: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/95f49c1c-b1e8-11ee-b5d0-0cc47a8f35c1/mon.cn02/config
Non-zero exit code 1 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:47de8754d1f72fadb61523247c897fdf673f9a9689503c64ca8384472d232c5c -e NODE_NAME=cn02.ceph.xyz.corp -e CEPH_VOLUME_OSDSPEC_AFFINITY=cost_capacity -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/95f49c1c-b1e8-11ee-b5d0-0cc47a8f35c1:/var/run/ceph:z -v /var/log/ceph/95f49c1c-b1e8-11ee-b5d0-0cc47a8f35c1:/var/log/ceph:z -v /var/lib/ceph/95f49c1c-b1e8-11ee-b5d0-0cc47a8f35c1/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmp49jj8zoh:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp_9k8v5uj:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:47de8754d1f72fadb61523247c897fdf673f9a9689503c64ca8384472d232c5c lvm batch --no-auto /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf --dmcrypt --yes --no-systemd
/usr/bin/podman: stderr Traceback (most recent call last):
/usr/bin/podman: stderr   File "/usr/sbin/ceph-volume", line 33, in <module>
/usr/bin/podman: stderr     sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 54, in __init__
/usr/bin/podman: stderr     self.main(self.argv)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 59, in newfunc
/usr/bin/podman: stderr     return f(*a, **kw)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 166, in main
/usr/bin/podman: stderr     terminal.dispatch(self.mapper, subcommand_args)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
/usr/bin/podman: stderr     instance.main()
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
/usr/bin/podman: stderr     terminal.dispatch(self.mapper, self.argv)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 192, in dispatch
/usr/bin/podman: stderr     instance = mapper.get(arg)(argv[count:])
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py", line 325, in __init__
/usr/bin/podman: stderr     self.args = parser.parse_args(argv)
/usr/bin/podman: stderr   File "/usr/lib64/python3.9/argparse.py", line 1825, in parse_args
/usr/bin/podman: stderr     args, argv = self.parse_known_args(args, namespace)
/usr/bin/podman: stderr   File "/usr/lib64/python3.9/argparse.py", line 1858, in parse_known_args
/usr/bin/podman: stderr     namespace, args = self._parse_known_args(args, namespace)
/usr/bin/podman: stderr   File "/usr/lib64/python3.9/argparse.py", line 2067, in _parse_known_args
/usr/bin/podman: stderr     start_index = consume_optional(start_index)
/usr/bin/podman: stderr   File "/usr/lib64/python3.9/argparse.py", line 2007, in consume_optional
/usr/bin/podman: stderr     take_action(action, args, option_string)
/usr/bin/podman: stderr   File "/usr/lib64/python3.9/argparse.py", line 1935, in take_action
/usr/bin/podman: stderr     action(self, namespace, argument_values, option_string)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py", line 17, in __call__
/usr/bin/podman: stderr     set_dmcrypt_no_workqueue()
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/util/encryption.py", line 54, in set_dmcrypt_no_workqueue
/usr/bin/podman: stderr     raise RuntimeError('Error while checking cryptsetup version.\n',
/usr/bin/podman: stderr RuntimeError: ('Error while checking cryptsetup version.\n', '`cryptsetup --version` output:\n', 'cryptsetup 2.7.2 flags: UDEV BLKID KEYRING FIPS KERNEL_CAPI PWQUALITY ')
Traceback (most recent call last):
  File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/tmp/tmpedb1_faj.cephadm.build/__main__.py", line 11009, in <module>
  File "/tmp/tmpedb1_faj.cephadm.build/__main__.py", line 10997, in main
  File "/tmp/tmpedb1_faj.cephadm.build/__main__.py", line 2593, in _infer_config
  File "/tmp/tmpedb1_faj.cephadm.build/__main__.py", line 2509, in _infer_fsid
  File "/tmp/tmpedb1_faj.cephadm.build/__main__.py", line 2621, in _infer_image
  File "/tmp/tmpedb1_faj.cephadm.build/__main__.py", line 2496, in _validate_fsid
  File "/tmp/tmpedb1_faj.cephadm.build/__main__.py", line 7226, in command_ceph_volume
  File "/tmp/tmpedb1_faj.cephadm.build/__main__.py", line 2284, in call_throws
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:47de8754d1f72fadb61523247c897fdf673f9a9689503c64ca8384472d232c5c -e NODE_NAME=cn02.ceph.xyz.corp -e CEPH_VOLUME_OSDSPEC_AFFINITY=cost_capacity -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/95f49c1c-b1e8-11ee-b5d0-0cc47a8f35c1:/var/run/ceph:z -v /var/log/ceph/95f49c1c-b1e8-11ee-b5d0-0cc47a8f35c1:/var/log/ceph:z -v /var/lib/ceph/95f49c1c-b1e8-11ee-b5d0-0cc47a8f35c1/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmp49jj8zoh:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp_9k8v5uj:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:47de8754d1f72fadb61523247c897fdf673f9a9689503c64ca8384472d232c5c lvm batch --no-auto /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf --dmcrypt --yes --no-systemd

ceph orch ls osd --export

service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: cost_capacity
service_name: osd.cost_capacity
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  encrypted: true
  filter_logic: AND
  objectstore: bluestore
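One thing I notice: the RuntimeError carries the full `cryptsetup --version` output, so cryptsetup itself appears to run fine inside the container, and it is ceph-volume's parsing of that output in set_dmcrypt_no_workqueue that gives up. As a guess at the failure mode (this is only an illustration I wrote, not the actual ceph-volume code), a parser that expects a bare "cryptsetup X.Y.Z" line would fail on output like the one above, which also lists the build flags:

# Illustration only -- not the ceph-volume source, just my guess at why the
# version check trips over this cryptsetup output.
import re

output = ("cryptsetup 2.7.2 flags: UDEV BLKID KEYRING FIPS "
          "KERNEL_CAPI PWQUALITY ")

# A strict parse that expects the whole line to be "cryptsetup X.Y.Z"
# finds nothing once the trailing "flags: ..." section is present...
strict = re.fullmatch(r"cryptsetup (\d+\.\d+\.\d+)\s*", output)
print(strict)                               # None -> the RuntimeError path

# ...while simply searching the line for a version number still works.
loose = re.search(r"\d+\.\d+\.\d+", output)
print(loose.group())                        # 2.7.2

If that is roughly what is happening, it would explain why the existing OSDs keep running and only the periodic re-apply of osd.cost_capacity trips the warning.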
Thank you
-jeremy