Re: Efficiently storing SHA-1 ↔ SHA-256 mappings in compatibility mode

On 2025-08-14 at 14:22:18, Junio C Hamano wrote:
> "brian m. carlson" <sandals@xxxxxxxxxxxxxxxxxxxx> writes:
> 
> I do not know if you want my input (as I wasn't CC'ed), but anyway...
> 
> > ...  We can store them in the
> > `loose-object-idx`, but since it's not sorted or easily searchable, it's
> > going to perform really terribly when we store enough of them.  Right
> > now, we read the entire file into two hashmaps (one in each direction)
> > and we sometimes need to re-read it when other processes add items, so
> it won't take much for it to become slow and use a lot of memory.
> >
> > For these reasons, I think we need a different datastore for this and
> > I'd like to solicit opinions on what that should look like.  Here are
> > some things that come to mind:
> 
> I do not see why loose-object-idx is not sorted in the first place,
> but to account for new objects getting into the object store, it
> would not be a viable way forward to maintain a single sorted file.
> We obviously do not want to keep rewriting it in its entirety all
> the time.

It's not sorted because no single sort order can efficiently serve
lookups in both directions.  If we sorted it in SHA-256 order, then we
would still have to look up items by SHA-1 with a linear search, and
vice versa.

What we do for pack index v3 is a sorted table of abbreviated names, a
mapping of that order to pack order, and then full object names in pack
order, with a set for each algorithm.  The abbreviated names all use the
same prefix size, which is just long enough to be unambiguous.  This
means that we can easily look up an object, find its index into pack
order, and then find the full object ID in any algorithm.
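Concretely, a lookup through those tables goes something like this (a
sketch with made-up struct and field names, not the actual pack index
v3 code; there would be one abbreviated table and mapping per
algorithm, and a full-name table per algorithm, with only SHA-256
shown here):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct abbrev_index {
	size_t nr;                        /* number of objects */
	size_t abbrev_len;                /* shared prefix length, in bytes */
	const unsigned char *abbrevs;     /* nr * abbrev_len bytes, sorted */
	const uint32_t *to_pack_order;    /* sorted position -> pack position */
	const unsigned char *full_sha256; /* nr * 32 bytes, in pack order */
};

/* Binary-search the sorted abbreviated names for an exact prefix match. */
static int abbrev_lookup(const struct abbrev_index *idx,
			 const unsigned char *prefix, uint32_t *pack_pos)
{
	size_t lo = 0, hi = idx->nr;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;
		int cmp = memcmp(prefix, idx->abbrevs + mid * idx->abbrev_len,
				 idx->abbrev_len);

		if (!cmp) {
			*pack_pos = idx->to_pack_order[mid];
			return 0;
		}
		if (cmp < 0)
			hi = mid;
		else
			lo = mid + 1;
	}
	return -1;
}

/* Map an abbreviated SHA-1 to the full SHA-256 name via pack order. */
static const unsigned char *sha256_from_sha1_abbrev(const struct abbrev_index *idx,
						    const unsigned char *prefix)
{
	uint32_t pack_pos;

	if (abbrev_lookup(idx, prefix, &pack_pos) < 0)
		return NULL;
	return idx->full_sha256 + (size_t)pack_pos * 32;
}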

We could probably write a data file that contains these same mappings,
except that, since we don't have a pack order, we could just sort by
the main algorithm and omit the main algorithm's mapping table.  We
could then have a single table for the necessary object metadata.
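Sketched out (the layout and names here are hypothetical, not a
settled format), with SHA-256 as the main algorithm that might look
like:

  header
  full SHA-256 names, sorted              <- defines "table order"
  abbreviated SHA-1 names, sorted         (fixed, unambiguous prefix length)
  mapping: SHA-1 sorted position -> table order
  full SHA-1 names, in table order
  object metadata records, in table order

A SHA-256 lookup is then a direct binary search on the first table,
and a SHA-1 lookup goes through the abbreviated table and the mapping,
exactly as in the pack index case above.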

> If there aren't any radically novel ideas, I would imagine that our
> design would default to having a big base file that is optimized for
> reading and searching, plus another format that is easier and
> quicker to write that would overlay, possibly in a way similar to
> how packed and loose refs work?

Yeah, that could be an option.  Or we could have a base file and some
incrementals, with a `git gc` once we accumulate 50 incremental files,
just like we repack when we hit 50 packfiles.
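As a sketch of how reads could work across those layers (types and
names are made up, and entries are shown as flat sorted arrays rather
than the on-disk tables above):

#include <stdlib.h>
#include <string.h>

struct mapping_entry {
	unsigned char sha1[20];
	unsigned char sha256[32];
};

/* One generation: the base file or a single incremental. */
struct mapping_file {
	struct mapping_file *older;    /* next-oldest generation */
	struct mapping_entry *entries; /* sorted by sha1 */
	size_t nr;
};

static int cmp_sha1(const void *key, const void *elem)
{
	return memcmp(key, ((const struct mapping_entry *)elem)->sha1, 20);
}

/* Consult generations newest-first, the way loose refs shadow packed ones. */
static const unsigned char *map_sha1_to_sha256(struct mapping_file *newest,
					       const unsigned char *sha1)
{
	struct mapping_file *f;

	for (f = newest; f; f = f->older) {
		struct mapping_entry *e = bsearch(sha1, f->entries, f->nr,
						  sizeof(*e), cmp_sha1);
		if (e)
			return e->sha256;
	}
	return NULL;
}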

> There are some objects for which we need to carry dynamic
> information, e.g. "we expect not to have this in our object store
> and that is fine", which may be set for objects immediately behind
> the shallow-clone boundary and may need to be cleared when the
> depth of shallowness changes.  Would it make sense to store these
> auxiliary pieces of information in separate place(s)?  I suspect
> that the objects that need these extra bits of information form a
> small subset of all objects for which we need the conversion data,
> so a separate table that is indexed using the order in the main
> table may not be a bad way to go.

My plan is to just wire this up to `git gc`.  We'd know what entries are
potentially disposable (such as shallows) and omit the unneeded entries
when repacking.
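
In code terms, something like this during the rewrite (the flag name
and the metadata layout are hypothetical):

/* Hypothetical per-entry flag bit in the metadata table. */
#define MAP_ENTRY_SHALLOW (1u << 0)

/*
 * When gc rewrites the base file, drop entries that existed only for
 * a shallow boundary that has since moved or gone away.
 */
static int keep_when_repacking(unsigned flags, int boundary_still_needs_it)
{
	if ((flags & MAP_ENTRY_SHALLOW) && !boundary_still_needs_it)
		return 0;
	return 1;
}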
-- 
brian m. carlson (they/them)
Toronto, Ontario, CA
