Jeff King <peff@xxxxxxxx> writes:

> But do we actually care about eventually having a series of 40MB packs?
> Or do we care about having some cutoff so that we don't rewrite those
> first 40MB on subsequent repacks?
>
> If the latter, then for step 2, what if we don't feed a max size? We'd
> end up with one 60MB pack (again, having written all of the bytes once).
> And on the next repack we'd leave it be (since it's over the threshold).
> We'll start forming new packs, which will eventually aggregate to 40MB
> (or possibly larger).
>
> If I understand the main purpose of the series, it is that we must
> rescue objects out of cruft packs if they became fresher (by having
> loose copies made). But that is orthogonal to max pack sizes, isn't it?
> We just need for pack-objects to be fed those objects (which should be
> happening already) and decide _not_ to omit them based on their presence
> in the kept cruft packs (based on the mtime in those cruft packs, of
> course). Which looks like what your want_cruft_object_mtime() is doing.

Yeah, no packsize limit on the output side, but making sure that the
decision to roll up existing cruft packs is made sensibly, is what the
above gives us, which I like a lot. The one-before and -after confusion
came exactly because we somehow tried to have a threshold on the output
side.
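
To make the "rescue" logic above concrete, here is a minimal sketch in C
of the kind of decision being described. The function name, parameters,
and structure are illustrative assumptions for this thread, not the
actual want_cruft_object_mtime() implementation in pack-objects:

```c
#include <stdbool.h>
#include <time.h>

/*
 * Illustrative sketch, not Git's code: given an object that already
 * exists in a kept cruft pack, decide whether pack-objects should
 * still write it into the new pack ("rescue" it) rather than omit it.
 *
 * cruft_mtime: the mtime recorded for the object in the kept cruft pack
 * fresh_mtime: the mtime of the freshest other copy we know of
 *              (e.g. a loose copy made since the cruft pack was written)
 */
static bool want_cruft_object(time_t cruft_mtime, time_t fresh_mtime)
{
	/*
	 * A copy fresher than the cruft pack's recorded mtime means
	 * the object has been "re-freshened"; include it in the new
	 * pack instead of treating the stale cruft copy as sufficient.
	 */
	return fresh_mtime > cruft_mtime;
}
```

The point of the sketch is only that the decision hinges on comparing
mtimes, not on any pack-size threshold on the output side.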