Re: [PATCH] ceph: cleanup in ceph_alloc_readdir_reply_buffer()

On Mon, 2025-09-01 at 18:30 +0300, Alex Markuze wrote:
> Let's add unlikely() to these checks.
> 

Makes sense. Let me rework the patch.

Thanks,
Slava.

> On Sat, Aug 30, 2025 at 12:29 AM Viacheslav Dubeyko <slava@xxxxxxxxxxx> wrote:
> > 
> > From: Viacheslav Dubeyko <Slava.Dubeyko@xxxxxxx>
> > 
> > The Coverity Scan service has reported a potential issue
> > in ceph_alloc_readdir_reply_buffer() [1]. If order could
> > become negative, this logic would be broken:
> > 
> > num_entries = (PAGE_SIZE << order) / size;
> > 
> > Technically speaking, this check [2] should prevent the
> > order variable from being negative at that point:
> > 
> > if (!rinfo->dir_entries)
> >     return -ENOMEM;
> > 
> > However, the allocation logic requires some cleanup.
> > This patch makes sure that the calculated byte count
> > can never exceed ULONG_MAX before the get_order()
> > calculation. It also adds a check for a negative order
> > value to guarantee that the second half of the
> > function's code never operates on a negative order,
> > even if something goes wrong or is changed in the
> > first half of the function's logic.
> > 
> > [1] https://scan5.scan.coverity.com/#/project-view/64304/10063?selectedIssue=1198252
> > [2] https://elixir.bootlin.com/linux/v6.17-rc3/source/fs/ceph/mds_client.c#L2553
> > 
> > Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@xxxxxxx>
> > cc: Alex Markuze <amarkuze@xxxxxxxxxx>
> > cc: Ilya Dryomov <idryomov@xxxxxxxxx>
> > cc: Ceph Development <ceph-devel@xxxxxxxxxxxxxxx>
> > ---
> >  fs/ceph/mds_client.c | 9 +++++++--
> >  1 file changed, 7 insertions(+), 2 deletions(-)
> > 
> > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > index 0f497c39ff82..d783326d6183 100644
> > --- a/fs/ceph/mds_client.c
> > +++ b/fs/ceph/mds_client.c
> > @@ -2532,6 +2532,7 @@ int ceph_alloc_readdir_reply_buffer(struct ceph_mds_request *req,
> >         struct ceph_mount_options *opt = req->r_mdsc->fsc->mount_options;
> >         size_t size = sizeof(struct ceph_mds_reply_dir_entry);
> >         unsigned int num_entries;
> > +       u64 bytes_count;
> >         int order;
> > 
> >         spin_lock(&ci->i_ceph_lock);
> > @@ -2540,7 +2541,11 @@ int ceph_alloc_readdir_reply_buffer(struct ceph_mds_request *req,
> >         num_entries = max(num_entries, 1U);
> >         num_entries = min(num_entries, opt->max_readdir);
> > 
> > -       order = get_order(size * num_entries);
> > +       bytes_count = (u64)size * num_entries;
> > +       if (bytes_count > ULONG_MAX)
> > +               bytes_count = ULONG_MAX;
> > +
> > +       order = get_order((unsigned long)bytes_count);
> >         while (order >= 0) {
> >                 rinfo->dir_entries = (void*)__get_free_pages(GFP_KERNEL |
> >                                                              __GFP_NOWARN |
> > @@ -2550,7 +2555,7 @@ int ceph_alloc_readdir_reply_buffer(struct ceph_mds_request *req,
> >                         break;
> >                 order--;
> >         }
> > -       if (!rinfo->dir_entries)
> > +       if (!rinfo->dir_entries || order < 0)
> >                 return -ENOMEM;
> > 
> >         num_entries = (PAGE_SIZE << order) / size;
> > --
> > 2.51.0
> > 
