"brian m. carlson" <sandals@xxxxxxxxxxxxxxxxxxxx> writes: I do not know if you want my input (as I wasn't CC'ed), but anyway... > ... We can store them in the > `loose-object-idx`, but since it's not sorted or easily searchable, it's > going to perform really terribly when we store enough of them. Right > now, we read the entire file into two hashmaps (one in each direction) > and we sometimes need to re-read it when other processes add items, so > it won't take much to make it be slow and take a lot of memory. > > For these reasons, I think we need a different datastore for this and > I'd like to solicit opinions on what that should look like. Here are > some things that come to mind: I do not see why loose-object-idx is not sorted in the first place, but to account for new objects getting into the object store, it would not be a viable way forward to maintain a single sorted file. We obviously do not want to keep rewriting it in its entirety all the time, > Some rough ideas of what this could look like: > > * We could repurpose the top-bit of the pack order value in pack index > v3 to indicate an object that's not in the pack (this would limit us > to 2^31 items per pack). Nice to see an effort to see if we can do with a small incremental change, but would a single bit be sufficient to cover all the needs? I suspect that the answer is no, in which case the v3 pack .idx format would need to be further tweaked, but in that case we do not have to resort to such a trick of stealing a single bit from here and abusing it for other purposes. We should just make sure that the new .idx file format can have extensions, unlike older format that has fixed sections in fixed order. If there aren't any radically novel idea, I would imagine that our design would default to have a big base file that is optimized for reading and searching, plus another format that is easier and quicker to write that would overlay, possibly in a way similar to packed and loose refs work? > * We could write some sort of quadratic rollup format like reftable. The mapping between two hash formats is stable and once computed can be cast in stone. Other attributes like the type of each object may fall into the same category. Multi-level roll-up may be overkill for such static data items, especially if consolidation would be a simple "merge two sorted files into one sorted file" operation. As there are some objects for which we need to carry dynamic information, e.g. "we expect not to have this in our object store and that is fine", which may be set for objects immediately behind the shallow-clone boundary, may need to be cleared when the depth of shallowness changes. Would it make sense to store these auxiliary pieces of information in separate place(s)? I suspect that the objects that need these extra bits of information form a small subset of all objects that we need to have the conversion data, so a separate table that is indexed into using the order in the main table may not be a bad way to go.