On Wed, May 21, 2025 at 04:19:59PM +0100, Phillip Wood wrote:

> > >  		expected_size /= p->num_objects;
> > >
> > >  		if (expected_size >= batch_size)
> > >  			continue;
> > >
> > > -		total_size += expected_size;
> > > +		if (unsigned_add_overflows(total_size, (size_t)expected_size))
> > > +			total_size = SIZE_MAX;
> > > +		else
> > > +			total_size += expected_size;
> > > +
> >
> > But this part I am not totally following. Here we have 'total_size'
> > declared as a size_t, and 'expected_size' as a uint64_t, and (on 32-bit
> > systems) down-cast to a 32-bit unsigned value.
> >
> > So if 'expected_size' is larger than SIZE_MAX, we should set
> > 'total_size' to SIZE_MAX. But that may not happen, say if
> > 'expected_size' is (2^32-1<<32). Should total_size also be declared as a
> > uint64_t here?
>
> By this point we know that expected_size < SIZE_MAX due to the test in
> the context lines above this change. batch_size is declared as size_t,
> and to get here expected_size < batch_size. I'll add a sentence to the
> commit message to make that clearer.

Ahh... makes sense. I don't think a comment is necessary, this should
have been obvious. The check you're referring to gives us the fact that

    expected_size < batch_size <= SIZE_MAX

So we're OK here; sorry for missing that!

Thanks,
Taylor