Re: [PATCH v2] ceph: Fix multifs mds auth caps issue

On Fri, 2025-08-01 at 23:09 +0530, khiremat@xxxxxxxxxx wrote:
> From: Kotresh HR <khiremat@xxxxxxxxxx>
> 
> The mds auth caps check should also validate the
> fsname along with the associated caps. Not doing
> so results in applying the mds auth caps of one
> fs onto another fs in a multifs ceph cluster.
> The bug causes multiple issues w.r.t. user
> authentication; the following is one such example.
> 
> Steps to Reproduce (on vstart cluster):
> 1. Create two file systems in a cluster, say 'fsname1' and 'fsname2'
> 2. Authorize read only permission to the user 'client.usr' on fs 'fsname1'
>     $ceph fs authorize fsname1 client.usr / r
> 3. Authorize read and write permission to the same user 'client.usr' on fs 'fsname2'
>     $ceph fs authorize fsname2 client.usr / rw
> 4. Update the keyring
>     $ceph auth get client.usr >> ./keyring
> 
> With the above permissions for the user 'client.usr', the following is
> expected:
>   a. The 'client.usr' should only be able to read the contents
>      and not be allowed to create or delete files on file system 'fsname1'.
>   b. The 'client.usr' should be able to read/write on file system 'fsname2'.
> 
> But, with this bug, the 'client.usr' is allowed to read/write on file
> system 'fsname1'. See below.
> 
> 5. Mount the file system 'fsname1' with the user 'client.usr'
>      $sudo bin/mount.ceph usr@.fsname1=/ /kmnt_fsname1_usr/
> 6. Try creating a file on file system 'fsname1' with user 'client.usr'. This
>    should fail but succeeds with this bug.
>      $touch /kmnt_fsname1_usr/file1
> 7. Mount the file system 'fsname1' with the user 'client.admin' and create a
>    file.
>      $sudo bin/mount.ceph admin@.fsname1=/ /kmnt_fsname1_admin
>      $echo "data" > /kmnt_fsname1_admin/admin_file1
> 8. Try removing an existing file on file system 'fsname1' with the user
>    'client.usr'. This shouldn't succeed but does with the bug.
>      $rm -f /kmnt_fsname1_usr/admin_file1
> 
> For more information, please take a look at the corresponding mds/fuse patch
> and the tests that were added; see the tracker mentioned below.
> 
> URL: https://tracker.ceph.com/issues/72167  
> Signed-off-by: Kotresh HR <khiremat@xxxxxxxxxx>
> ---
>  fs/ceph/mds_client.c | 10 ++++++++++
>  fs/ceph/mdsmap.c     | 11 ++++++++++-
>  fs/ceph/mdsmap.h     |  1 +
>  3 files changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index 9eed6d73a508..8472ae7b7f3d 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -5640,11 +5640,21 @@ static int ceph_mds_auth_match(struct ceph_mds_client *mdsc,
>  	u32 caller_uid = from_kuid(&init_user_ns, cred->fsuid);
>  	u32 caller_gid = from_kgid(&init_user_ns, cred->fsgid);
>  	struct ceph_client *cl = mdsc->fsc->client;
> +	const char *fs_name = mdsc->mdsmap->fs_name;
>  	const char *spath = mdsc->fsc->mount_options->server_path;
>  	bool gid_matched = false;
>  	u32 gid, tlen, len;
>  	int i, j;
>  
> +	if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) {
> +		doutc(cl, "fsname check failed fs_name=%s  match.fs_name=%s\n",
> +		      fs_name, auth->match.fs_name);

If the fsname check failed, then why is it not an error message?
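
For example, something like this (an untested sketch, just to illustrate the
question; pr_warn_client() is what I have in mind, assuming I remember the
helper correctly):

	if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) {
		/* this cap is scoped to a different fs than the mounted one */
		pr_warn_client(cl, "fs_name %s does not match cap fs_name %s\n",
			       fs_name, auth->match.fs_name);
		return 0;
	}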

> +		return 0;
> +	} else {
> +		doutc(cl, "fsname check passed fs_name=%s  match.fs_name=%s\n",
> +		      fs_name, auth->match.fs_name ? auth->match.fs_name : "");

Maybe we could have one doutc message before the check. Why do we have two?
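
Something along these lines, for example (an untested sketch of the
single-message idea):

	doutc(cl, "fs_name check: fs_name=%s match.fs_name=%s\n", fs_name,
	      auth->match.fs_name ? auth->match.fs_name : "");
	if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name))
		return 0;

That way both the pass and the fail case are covered by one message.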

> +	}
> +
>  	doutc(cl, "match.uid %lld\n", auth->match.uid);
>  	if (auth->match.uid != MDS_AUTH_UID_ANY) {
>  		if (auth->match.uid != caller_uid)
> diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c
> index 8109aba66e02..f1431ba0b33e 100644
> --- a/fs/ceph/mdsmap.c
> +++ b/fs/ceph/mdsmap.c
> @@ -356,7 +356,15 @@ struct ceph_mdsmap *ceph_mdsmap_decode(struct ceph_mds_client *mdsc, void **p,
>  		/* enabled */
>  		ceph_decode_8_safe(p, end, m->m_enabled, bad_ext);
>  		/* fs_name */
> -		ceph_decode_skip_string(p, end, bad_ext);
> +	        m->fs_name = ceph_extract_encoded_string(p, end, NULL, GFP_NOFS);
> +	        if (IS_ERR(m->fs_name)) {
> +			err = PTR_ERR(m->fs_name);
> +			m->fs_name = NULL;
> +			if (err == -ENOMEM)
> +				goto out_err;
> +			else
> +				goto bad;
> +	        }
>  	}
>  	/* damaged */
>  	if (mdsmap_ev >= 9) {
> @@ -418,6 +426,7 @@ void ceph_mdsmap_destroy(struct ceph_mdsmap *m)
>  		kfree(m->m_info);
>  	}
>  	kfree(m->m_data_pg_pools);
> +	kfree(m->fs_name);
>  	kfree(m);
>  }
>  
> diff --git a/fs/ceph/mdsmap.h b/fs/ceph/mdsmap.h
> index 1f2171dd01bf..acb0a2a3627a 100644
> --- a/fs/ceph/mdsmap.h
> +++ b/fs/ceph/mdsmap.h
> @@ -45,6 +45,7 @@ struct ceph_mdsmap {
>  	bool m_enabled;
>  	bool m_damaged;
>  	int m_num_laggy;
> +	char *fs_name;

Let's finish the discussion related to fs_name. I've shared my questions in
previous email threads. Even if we keep fs_name here, it should be at the
beginning of the structure. But I am still not convinced that we should have
it here.
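
To illustrate the placement I have in mind (just a sketch, field list
abbreviated and written from memory):

	struct ceph_mdsmap {
		char *fs_name;		/* map-wide metadata first */
		u32 m_epoch, m_client_epoch, m_last_failure;
		...
		bool m_enabled;
		bool m_damaged;
		int m_num_laggy;
	};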

Thanks,
Slava.

>  };
>  
>  static inline struct ceph_entity_addr *




