Re: FS does not mount after update to Quincy


 



Can the client talk to the MDS on the port it listens on?
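For example (a rough check; the MDS listens on its own port in the 6800-7300 range, not on the mon ports, and MDS_HOST/MDS_PORT below are placeholders):

ceph fs dump | grep addr      # on a mon node: shows the MDS address and port
nc -vz MDS_HOST MDS_PORT      # from the client: probe that port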

On Fri, 11 Apr 2025 at 08:59, Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx> wrote:
>
>
>
> Hi guys, good morning,
>
>
> Since I performed the update to Quincy, I've noticed a problem that wasn't present with Octopus. Currently, our Ceph cluster exports a filesystem to certain nodes, which we use as a backup repository.
> The machines that mount this FS are currently running Ubuntu 24 with Ceph Squid as the client version.
>
> zeus22:~ # ls -la /cephvmsfs/
> total 225986576
> drwxrwxrwx 13 root   root              17 Apr  4 13:10 .
> drwxr-xr-x  1 root   root             286 Mar 19 13:27 ..
> -rw-r--r--  1 root   root    124998647808 Apr  4 13:18 arcceal9.img
> drwxrwxrwx  2 nobody nogroup            2 Jul 12  2018 backup
> drwxr-xr-x  2 nobody nogroup            1 Oct 18  2017 Default
> -rw-r--r--  1 root   root     21474836480 Mar 26 18:11 ns1.img
> drwxr-xr-x  2 root   root               1 Aug 29  2024 OnlyOffice
> Before the update, these nodes mounted the FS correctly (even with the cluster on Octopus and the clients on Squid), and the nodes that haven't been restarted are still accessing it.
>
> One of these machines has been reinstalled and, even using the same configuration as the nodes that are still mounting this FS, it is unable to mount, with errors such as:
>
> `mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized`
> 10.10.3.1:3300,10.10.3.2:3300,10.10.3.3:3300:/ /cephvmsfs ceph name=cephvmsfs,secretfile=/etc/ceph/cephvmsfs.secret,noatime,mds_namespace=cephvmsfs,_netdev 0 0
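> For comparison, a sketch of the same entry with ms_mode set (an assumption worth testing: newer kernel clients only speak msgr2 on port 3300 when ms_mode is given, supported since kernel 5.11; without it the kernel defaults to the legacy v1 protocol):
>
> 10.10.3.1:3300,10.10.3.2:3300,10.10.3.3:3300:/ /cephvmsfs ceph name=cephvmsfs,secretfile=/etc/ceph/cephvmsfs.secret,ms_mode=prefer-crc,noatime,mds_namespace=cephvmsfs,_netdev 0 0
>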
>
> If I change the port to 6789 (v1), I get:
>
>
> mount error 110 = Connection timed out
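>
> When the mount times out, the kernel log usually records why; e.g. (assuming the kernel client is used):
>
> zeus:~ # dmesg | grep -E 'libceph|ceph'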
>
> The Ceph cluster is healthy and the MDS daemons are up:
>
> cephmon01:~ # ceph -s
>   cluster:
>     id:     6f5a65a7-yyy-zzzz-xxxx-428608941dd1
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum cephmon01,cephmon03,cephmon02 (age 2d)
>     mgr: cephmon02(active, since 7d), standbys: cephmon01, cephmon03
>     mds: 1/1 daemons up, 1 standby
>     osd: 231 osds: 231 up (since 7d), 231 in (since 9d)
>     rgw: 2 daemons active (2 hosts, 1 zones)
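>
> To rule out the "not authorized" half of the error message, the fs status and the client's caps can be checked too (assuming the key is named client.cephvmsfs):
>
> cephmon01:~ # ceph fs status cephvmsfs
> cephmon01:~ # ceph auth get client.cephvmsfs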
>
>
>
> The Ceph mons are reachable from the clients on both ports:
> zeus:~ # telnet cephmon02 6789
> Trying 10.10.3.2...
> Connected to cephmon02.
> Escape character is '^]'.
> ceph v027 (binary banner bytes follow)
>
>
> zeus01:~ # telnet cephmon02 3300
> Trying 10.10.3.2...
> Connected to cephmon02.
> Escape character is '^]'.
> ceph v2
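>
> For completeness, the addresses (v1 and v2) that the mons actually advertise can be listed with:
>
> cephmon01:~ # ceph mon dump | grep mon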
>
>
> Any advice is welcome. Regards, I
> --
>
> ================================================================
> Ibán Cabrillo Bartolomé
> Instituto de Física de Cantabria (IFCA-CSIC)
> Santander, Spain
> Tel: +34942200969 / +34669930421
> Responsible for advanced computing service (RSC)
> ================================================================
> All our suppliers must know and accept IFCA policy available at:
>
> https://confluence.ifca.es/display/IC/Information+Security+Policy+for+External+Suppliers
> ================================================================
>



-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



