Putting an OSD host with NFS daemons in maintenance


 



Hello,

When trying to put one of our OSD hosts in maintenance, "ceph orch host maintenance enter" displays:
WARNING: Removing NFS daemons can cause clients to lose connectivity.
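For context, the invocation is roughly the following (the hostname is a placeholder for one of our OSD hosts; I believe --force would bypass the warning, but we'd rather understand the impact on NFS clients first):

ceph orch host maintenance enter <osd-hostname>
# presumably this would override the warning, at the risk of dropping NFS clients:
ceph orch host maintenance enter <osd-hostname> --force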

We have a single CephFS filesystem in our cluster.  Five hosts are "admin" hosts that run cephadm, mds, mgr, mon, etc.  The rest of our hosts are OSD hosts with the spinning disks that make up our CephFS data pool.

Our current setup follows the "HIGH-AVAILABILITY NFS" documentation, which gives us an ingress.nfs.cephfs service with the haproxy and keepalived daemons and an nfs.cephfs service for the actual NFS daemons.  This was deployed using:
ceph nfs cluster create cephfs "label:_admin" --ingress --virtual_ip virtual_ip

We then manually updated the nfs.cephfs service this created to place the NFS daemons on our OSD nodes.
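Roughly speaking, that was done by exporting the generated spec, editing the placement, and re-applying it (commands from memory, the file name is just a placeholder):

ceph orch ls nfs --export > nfs-cephfs.yaml
# change the placement from "label: _admin" to "label: osd" in the file
ceph orch apply -i nfs-cephfs.yaml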

This gives us the following:
———
service_type: ingress
service_id: nfs.cephfs
service_name: ingress.nfs.cephfs
placement:
  label: _admin
spec:
  backend_service: nfs.cephfs
  first_virtual_router_id: 50
  frontend_port: 2049
  monitor_port: 9049
  virtual_ip: virtual_ip/prefix
---
service_type: nfs
service_id: cephfs
service_name: nfs.cephfs
placement:
  label: osd
spec:
  port: 12049
———
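For what it's worth, checking where the daemons actually run (output omitted) should list the nfs.cephfs daemons on the OSD-labelled hosts and the haproxy/keepalived daemons on the admin hosts:

ceph orch ps --service_name nfs.cephfs
ceph orch ps --service_name ingress.nfs.cephfs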

How can we safely put one of our OSD nodes in maintenance?  Or is there a better way to organize our daemons, or to set up and manage our NFS service, that would avoid this issue?

Many thanks,
Devin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



