Re: [PATCH v2] ceph: cleanup in ceph_alloc_readdir_reply_buffer()

Reviewed-by: Alex Markuze <amarkuze@xxxxxxxxxx>

On Tue, Sep 2, 2025 at 10:09 PM Viacheslav Dubeyko <slava@xxxxxxxxxxx> wrote:
>
> From: Viacheslav Dubeyko <Slava.Dubeyko@xxxxxxx>
>
> The Coverity Scan service has reported a potential issue
> in ceph_alloc_readdir_reply_buffer() [1]. If order could
> become negative, this calculation would be broken:
>
> num_entries = (PAGE_SIZE << order) / size;
>
> Shifting by a negative count is undefined behavior in C.
> Technically speaking, though, this check [2] should already
> prevent the order variable from being negative at that
> point, because the allocation loop only leaves order
> negative when every attempt has failed, and in that case
> rinfo->dir_entries is NULL (see the reduced model below):
>
> if (!rinfo->dir_entries)
>     return -ENOMEM;
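>
> To see why, here is a reduced userspace model of the
> fallback loop; try_alloc() is a hypothetical stand-in for
> __get_free_pages(), wired to always fail so that the worst
> case is exercised:
>
>     #include <stdio.h>
>     #include <stddef.h>
>
>     /* hypothetical allocator that always fails */
>     static void *try_alloc(int order) { (void)order; return NULL; }
>
>     int main(void)
>     {
>             void *p = NULL;
>             int order = 3;  /* assumed starting order */
>
>             while (order >= 0) {
>                     p = try_alloc(order);
>                     if (p)
>                             break;
>                     order--;
>             }
>             /* p == NULL here implies order == -1 */
>             printf("p=%p, order=%d\n", p, order);
>             return 0;
>     }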
>
> However, the allocation logic deserves some cleanup.
> This patch makes sure that the calculated byte count
> never exceeds ULONG_MAX before it is passed to
> get_order(). It also adds an explicit check that the
> order variable is not negative, guaranteeing that the
> second half of the function never operates on a
> negative order even if the first half's logic breaks
> or is changed in the future.
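>
> For illustration only (not part of the patch), here is a
> minimal userspace sketch of the clamp; PAGE_SIZE,
> get_order() and the entry size below are stand-ins, not
> the kernel definitions:
>
>     #include <limits.h>
>     #include <stdio.h>
>
>     #define PAGE_SHIFT 12
>     #define PAGE_SIZE  (1UL << PAGE_SHIFT)
>
>     /* same semantics as the kernel helper for size > 0 */
>     static int get_order(unsigned long size)
>     {
>             int order = 0;
>
>             size = (size - 1) >> PAGE_SHIFT;
>             while (size) {
>                     order++;
>                     size >>= 1;
>             }
>             return order;
>     }
>
>     int main(void)
>     {
>             size_t entry_size = 40;  /* assumed entry size */
>             unsigned int num_entries = UINT_MAX;
>             unsigned long long bytes_count;
>
>             /* widen before multiplying, then clamp, as in the
>              * patch (unsigned long long stands in for u64) */
>             bytes_count = (unsigned long long)entry_size * num_entries;
>             if (bytes_count > ULONG_MAX)
>                     bytes_count = ULONG_MAX;
>
>             printf("order = %d\n",
>                    get_order((unsigned long)bytes_count));
>             return 0;
>     }
>
> On a 64-bit build the clamp never fires; it only matters
> where unsigned long is 32 bits wide and the product could
> otherwise be truncated.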
>
> v2
> Alex Markuze suggested adding the unlikely() macro
> to the newly introduced condition checks.
>
> [1] https://scan5.scan.coverity.com/#/project-view/64304/10063?selectedIssue=1198252
> [2] https://elixir.bootlin.com/linux/v6.17-rc3/source/fs/ceph/mds_client.c#L2553
>
> Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@xxxxxxx>
> cc: Alex Markuze <amarkuze@xxxxxxxxxx>
> cc: Ilya Dryomov <idryomov@xxxxxxxxx>
> cc: Ceph Development <ceph-devel@xxxxxxxxxxxxxxx>
> ---
>  fs/ceph/mds_client.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index 0f497c39ff82..992987801753 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -2532,6 +2532,7 @@ int ceph_alloc_readdir_reply_buffer(struct ceph_mds_request *req,
>         struct ceph_mount_options *opt = req->r_mdsc->fsc->mount_options;
>         size_t size = sizeof(struct ceph_mds_reply_dir_entry);
>         unsigned int num_entries;
> +       u64 bytes_count;
>         int order;
>
>         spin_lock(&ci->i_ceph_lock);
> @@ -2540,7 +2541,11 @@ int ceph_alloc_readdir_reply_buffer(struct ceph_mds_request *req,
>         num_entries = max(num_entries, 1U);
>         num_entries = min(num_entries, opt->max_readdir);
>
> -       order = get_order(size * num_entries);
> +       bytes_count = (u64)size * num_entries;
> +       if (unlikely(bytes_count > ULONG_MAX))
> +               bytes_count = ULONG_MAX;
> +
> +       order = get_order((unsigned long)bytes_count);
>         while (order >= 0) {
>                 rinfo->dir_entries = (void*)__get_free_pages(GFP_KERNEL |
>                                                              __GFP_NOWARN |
> @@ -2550,7 +2555,7 @@ int ceph_alloc_readdir_reply_buffer(struct ceph_mds_request *req,
>                         break;
>                 order--;
>         }
> -       if (!rinfo->dir_entries)
> +       if (!rinfo->dir_entries || unlikely(order < 0))
>                 return -ENOMEM;
>
>         num_entries = (PAGE_SIZE << order) / size;
> --
> 2.51.0
>