On Fri, Aug 15, 2025 at 11:27:45AM -0400, Derrick Stolee wrote:

Sorry, a bit late to the party due to various things going on that
distracted me for the last couple of weeks.

> On 8/13/2025 9:09 PM, brian m. carlson wrote:
> > TL;DR: We need a different datastore than a flat file for storing
> > mappings between SHA-1 and SHA-256 in compatibility mode. Advice and
> > opinions sought.
> ...
> > Our approach for mapping object IDs between algorithms uses data in pack
> > index v3 (outlined in the transition document), plus a flat file called
> > `loose-object-idx` for loose objects. However, we didn't anticipate
> > that we'd need to handle mappings long-term for data that is neither a
> > loose object nor a packed object.
>
> I'm generally not a fan of this approach to (ab)use the pack index format
> for this, especially when the translation needs to expand beyond "objects
> in the local repo".
>
> The requirements, as I see them, are:
>
>  1. Given an OID in Hash1, load its mapped OID in Hash2 in O(log N) time.
>  2. Given an OID in Hash2, load its mapped OID in Hash1 in O(log N) time.
>  3. As OID pairs are discovered, add them to the data structure in ~O(new)
>     time.
>
> > Some rough ideas of what this could look like:
> >
> > * We could repurpose the top-bit of the pack order value in pack index
> >   v3 to indicate an object that's not in the pack (this would limit us
> >   to 2^31 items per pack).
> > * We could put this in new entries in multi-pack index and require that
> >   (although I'm not sure that I love the idea of requiring multi-pack
> >   index in all repositories and I have yet to implement compatibility
> >   mode there).
> > * We could write some sort of quadratic rollup format like reftable.
>
> My thought is that the last option is going to be best.

Yeah, agreed. One thing that we tend to always end up with eventually
is using geometric sequences for repacking data structures, because
ultimately, the bigger a repository grows, the more expensive it
becomes over time to rewrite the base file. And furthermore, there are
always going to be use cases where rewriting the base file needs to
happen a whole lot more frequently than one would reasonably expect.

> It does require starting a new file format from scratch, but it
> doesn't need to be complicated:
>
>  * Header information includes:
>    - file version info.
>    - hash versions in mapping.
>    - the number of OIDs in the format
>    - the previous mapping file(s) in the chain
>    - offsets to the Hash1 and Hash2 tables.
>    - room for expansion to other data being added to the format,
>      as necessary in the future.
>  * Hash1 table has a lex-ordered list of Hash1 OIDs and int IDs to do
>    lookups of the mapped Hash2 OIDs from the second table (by position).
>  * Hash2 table has a lex-ordered list of Hash2 OIDs and int IDs to do
>    lookups of the mapped Hash1 OIDs from the first table (by position).

I was wondering whether we need to fully reinvent the wheel here. We
already use our chunk format for multiple different data formats
(commit graphs, MIDX), so maybe we can also reuse it for this type of
mapping?
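
To make the lookup side a bit more concrete, here is a rough sketch of
what it could look like if the two tables were stored as chunks. To be
clear, the chunk IDs, struct names and record layout below are made up
for illustration only, and I'm glossing over the chunk-format plumbing
that would locate the chunks in the first place:

    /*
     * Sketch only; chunk IDs, names and the on-disk layout are made up
     * for illustration and are not what the series actually does.
     *
     * Both chunks contain record_nr fixed-width records, sorted by OID:
     *
     *   "SH1M": record_nr * (hash1 OID + 4-byte index into "SH2M")
     *   "SH2M": record_nr * (hash2 OID + 4-byte index into "SH1M")
     */
    #include "git-compat-util.h"

    struct oid_map_table {
    	const unsigned char *records; /* mmap'ed chunk payload */
    	size_t record_nr;             /* number of records */
    	size_t hash_len;              /* raw hash size of this table */
    };

    /*
     * Map an OID from one table to the other: binary-search the source
     * table, then use the stored position to index into the target
     * table. Returns 0 and fills `mapped` on success, -1 otherwise.
     */
    static int oid_map_lookup(const struct oid_map_table *from,
    			  const struct oid_map_table *to,
    			  const unsigned char *oid,
    			  unsigned char *mapped)
    {
    	size_t record_len = from->hash_len + sizeof(uint32_t);
    	size_t lo = 0, hi = from->record_nr;

    	while (lo < hi) {
    		size_t mid = lo + (hi - lo) / 2;
    		const unsigned char *rec = from->records + mid * record_len;
    		int cmp = memcmp(oid, rec, from->hash_len);

    		if (!cmp) {
    			uint32_t pos = get_be32(rec + from->hash_len);
    			if (pos >= to->record_nr)
    				return -1; /* corrupt mapping */
    			memcpy(mapped, to->records +
    			       (size_t)pos * (to->hash_len + sizeof(uint32_t)),
    			       to->hash_len);
    			return 0;
    		} else if (cmp < 0) {
    			hi = mid;
    		} else {
    			lo = mid + 1;
    		}
    	}

    	return -1;
    }

With a layered collection of such files the same lookup would simply be
repeated per layer, newest first, which matches the cost you describe
below.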
> Lookup time would be O(L * log N) where L is the number of layers in
> the collection of files. Writing time could be as low as the size of
> a new layer on top, with squashing of layers handled in the background
> or in the foreground (opportunistically for small layers or as needed
> if background maintenance is not available).
>
> I'm sure that things are more complicated than I'm making it out to
> be in this email. I haven't looked at your branch to see the subtle
> details around this. Hopefully this just gives you ideas that you
> can use as you compare options.

Likewise, and I'm very sure that due to me being late the ship has
already sailed :)

Patrick