Re: v20.1.0 Tentacle RC0 released

Bootstrapping a new cluster fails as well with the same trace:

/usr/bin/ceph: stderr ImportError: cannot import name 'TypedDict'


...
Adding key to root@localhost authorized_keys...
Adding host soc9-ceph...
Non-zero exit code 22 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v20.1.0 -e NODE_NAME=soc9-ceph -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/dba035cc-8db1-11f0-9bf6-fa163e2ad8c5:/var/log/ceph:z -v /tmp/ceph-tmp9fytdo0h:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp5n_50w5k:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v20.1.0 orch host add soc9-ceph 192.168.124.186
/usr/bin/ceph: stderr Error EINVAL: check-host failed:
/usr/bin/ceph: stderr Traceback (most recent call last):
/usr/bin/ceph: stderr File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
/usr/bin/ceph: stderr     "__main__", mod_spec)
/usr/bin/ceph: stderr File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
/usr/bin/ceph: stderr     exec(code, run_globals)
/usr/bin/ceph: stderr File "/var/lib/ceph/dba035cc-8db1-11f0-9bf6-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/__main__.py", line 190, in <module>
/usr/bin/ceph: stderr File "/var/lib/ceph/dba035cc-8db1-11f0-9bf6-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/cephadmlib/listing.py", line 53, in <module>
/usr/bin/ceph: stderr ImportError: cannot import name 'TypedDict'
ERROR: Failed to add host <soc9-ceph>: Failed command: /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v20.1.0 -e NODE_NAME=soc9-ceph -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/dba035cc-8db1-11f0-9bf6-fa163e2ad8c5:/var/log/ceph:z -v /tmp/ceph-tmp9fytdo0h:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp5n_50w5k:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v20.1.0 orch host add soc9-ceph 192.168.124.186
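
For reference: `TypedDict` only exists in the stdlib `typing` module since Python 3.8, while openSUSE Leap 15.6 still ships Python 3.6, so the bare import in cephadmlib/listing.py has nothing to resolve. A minimal sketch of a guarded import that would sidestep this on older interpreters (the `typing_extensions` fallback is my assumption, not necessarily what cephadm will adopt):

    # TypedDict was added to 'typing' in Python 3.8; Leap 15.6 ships 3.6.
    try:
        from typing import TypedDict  # Python >= 3.8
    except ImportError:
        # Hypothetical fallback via the 'typing_extensions' backport
        from typing_extensions import TypedDict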

Quoting Eugen Block <eblock@xxxxxx>:

Hi,

thanks for the info, I'm excited to test the new release candidate!

And I already hit my first issue: upgrading a tiny single-node cluster from 19.2.3 to 20.1.0 fails (health detail at the end). The host is a VM running openSUSE Leap 15.6. The first MGR seems to have been upgraded successfully:

soc9-ceph:~ # ceph versions -f json | jq -r '.mgr'
{
"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)": 1, "ceph version 20.1.0 (010a3ad647c9962d47812a66ad6feda26ab28aa4) tentacle (rc - RelWithDebInfo)": 1
}


Is this already a known issue?

Thanks!
Eugen

# ceph orch upgrade status
{
    "in_progress": true,
"target_image": "quay.io/ceph/ceph@sha256:06b298a25e7cee11677f06a54ad90bb69f9b295e0d5482663f26b439d14d4045",
    "services_complete": [],
    "which": "Upgrading all daemon types on all hosts",
    "progress": "1/10 daemons upgraded",
"message": "Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image",
    "is_paused": true
}


soc9-ceph:~ # ceph health detail
HEALTH_WARN failed to probe daemons or devices; Upgrade: failed to pull target image
[WRN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
host soc9-ceph `cephadm ls` failed: cephadm exited with an error code: 1, stderr: Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/__main__.py", line 190, in <module> File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/cephadmlib/listing.py", line 53, in <module>
ImportError: cannot import name 'TypedDict'
host soc9-ceph `cephadm gather-facts` failed: cephadm exited with an error code: 1, stderr: Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/__main__.py", line 190, in <module> File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/cephadmlib/listing.py", line 53, in <module>
ImportError: cannot import name 'TypedDict'
host soc9-ceph `cephadm list-networks` failed: cephadm exited with an error code: 1, stderr: Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/__main__.py", line 190, in <module> File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/cephadmlib/listing.py", line 53, in <module>
ImportError: cannot import name 'TypedDict'
[WRN] UPGRADE_FAILED_PULL: Upgrade: failed to pull target image
failed to pull quay.io/ceph/ceph@sha256:06b298a25e7cee11677f06a54ad90bb69f9b295e0d5482663f26b439d14d4045 on host soc9-ceph

Quoting Yuri Weinstein <yweinste@xxxxxxxxxx>:

This is the first release candidate for Tentacle.
Packages were built for Ubuntu 22.04, Ubuntu 24.04, and CentOS 9 for this RC0.

Feature highlights:

* RADOS: New features include long-expected performance optimizations (FastEC)
 for EC pools, including partial reads and partial writes. Users can also
 expect to see BlueStore improvements such as compression and a new, faster
 WAL (write-ahead log). Additional improvements include switching all
 components to the faster OMAP iteration interface, bypassable ceph_assert()s,
 fixed mclock bugs and configuration defaults, and testing improvements for
 dencoding verification.
* MGR: Highlights include the ability to force-disable always-on modules and
 the removal of the restful and zabbix modules (both deprecated since 2020).
 Note that the dashboard module's richer and better-maintained RESTful API can
 be used as an alternative to the restful module, and the prometheus module
 can be used as an alternative monitoring solution to zabbix.
* RGW: Multiple fixes: Lua scripts no longer run against health checks, and
 ETag values returned in S3 CopyPart, PostObject, and CompleteMultipartUpload
 responses are now properly quoted.
* RGW: IAM policy evaluation now supports conditions ArnEquals and ArnLike,
 along with their Not and IfExists variants.
* RBD: New live migration features: RBD images can now be instantly
 imported from another Ceph cluster (native format) or from a wide
 variety of external sources/formats with the help of the new NBD
 stream and an appropriately capable NBD server such as `qemu-nbd`.
 Also added support for RBD namespace remapping while mirroring between
 Ceph clusters, new `rbd group info` and `rbd group snap info` commands,
 and an enhanced `rbd group snap ls` command. The `rbd device map` command
 now defaults to msgr2.
* CephFS: Directories may now be configured with case-insensitive or
 normalized directory entry names. This configuration is inheritable, so it
 applies to an entire directory tree (see the sketch after this list). For
 more information, see https://docs.ceph.com/en/latest/cephfs/charmap/
* CephFS: Modifying the FS setting variable "max_mds" when a cluster is
 unhealthy now requires users to pass the confirmation flag
 (--yes-i-really-mean-it). This has been added as a precaution to tell users
 that modifying "max_mds" may not help with troubleshooting or recovery
 efforts; instead, it might further destabilize the cluster.
* CephFS: EOPNOTSUPP (Operation not supported) is now returned by the CephFS
 fuse client for `fallocate` in the default case (i.e. mode == 0), since
 CephFS does not support disk space reservation. The only supported flags are
 `FALLOC_FL_KEEP_SIZE` and `FALLOC_FL_PUNCH_HOLE` (see the sketch after this
 list).
* Dashboard: Added support for NVMe/TCP (gateway groups, multiple namespaces),
 multi-cluster management, OAuth2 integration, and enhanced RGW/SMB features
 including multi-site automation, tiering, policies, lifecycles,
 notifications, and granular replication.
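
A minimal sketch of the CephFS charmap feature, assuming a client mount at /mnt/cephfs and the vxattr names from the charmap docs linked above (my reading is that the setting must go on an empty directory and is then inherited by the whole subtree):

    import os

    d = "/mnt/cephfs/shared"  # hypothetical mount point and directory
    os.mkdir(d)
    # Disable case sensitivity on the still-empty directory; entries
    # created anywhere below it inherit the setting.
    os.setxattr(d, "ceph.dir.casesensitive", b"0")
    # Read back the combined charmap configuration.
    print(os.getxattr(d, "ceph.dir.charmap"))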
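And a small sketch of the new `fallocate` behavior, calling fallocate(2) via ctypes against a hypothetical ceph-fuse mount at /mnt/cephfs (mode == 0 should now fail with EOPNOTSUPP, while punching holes keeps working):

    import ctypes, ctypes.util, errno, os

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                               ctypes.c_long, ctypes.c_long]

    fd = os.open("/mnt/cephfs/testfile", os.O_RDWR | os.O_CREAT, 0o644)
    # mode == 0 asks for plain disk space reservation, which CephFS
    # does not support; the fuse client now returns EOPNOTSUPP.
    if libc.fallocate(fd, 0, 0, 1 << 20) != 0:
        print(ctypes.get_errno() == errno.EOPNOTSUPP)  # expected: True
    os.close(fd)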


* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-20.1.0.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: 010a3ad647c9962d47812a66ad6feda26ab28aa4
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




