> I've learned that entries in the index file "are
> sorted in ascending order on the name field".

Correct: the index maintains `struct cache_entry` entries in sorted order by file name. This is essential for fast lookup, diffing, and pathspec-based operations.

> Am I right in thinking that this means that
> every time a file is added to the index by
> running "git add" the whole index file must
> be resorted?

Not exactly. Git keeps the index in memory as a sorted array, so adding a new entry doesn't require a full resort, just a binary search to find the right insertion point (a short sketch of this appears below). Only when the index is written to disk does Git serialize the in-memory array, which is already sorted.

> If so, this seems like a lot of
> work, especially since not all the entries
> are the same size.

That's true in principle, but in practice the memory layout of `cache_entry` objects and memory mapping make it quite efficient, especially since typical index sizes are modest.

> Has any thought been made about improving this,
> such as perhaps having an "index index"? This
> would be a separate file that contains the name
> field of each entry, the location of where the entry
> starts in the index, and the length of the entry.
> I'll call this a partial index entry.

Interesting idea: effectively you're describing a secondary lookup structure, along the lines of a sparse index or a log-structured merge pattern (a hypothetical layout is sketched below). It has its appeal, particularly for large repositories with high-churn working trees.

> With this approach, running "git add" would simply
> append a full index entry to the index, and
> append the partial entry to the "index index", which
> would then be sorted. The full index would not be
> sorted. I'm guessing this is the common path.

This would reduce write amplification for `git add`, but it comes at a cost: many Git operations rely on the index being sorted. Reads would have to scan the full index or go through your "index index", which introduces more I/O and complexity.

> To delete a file from the index, I'd propose adding a
> "deleted" bit to the full cache entry. When "git rm --cached"
> is run, 2 things would happen:
>
> 1) The "deleted" bit would be turned on in the full index
> entry for the file.

This assumes lazy cleanup, which is reasonable for append-only systems. But Git today avoids keeping dead entries around, for clarity and correctness (especially under concurrent access).

> 2) The "index index" would be modified by removing the
> partial entry for the file.

This makes sense for maintaining the primary lookup structure.

> One drawback of this approach would be that since the "index index"
> entries also won't be the same length, sorting it will still require
> extra work. However, this wouldn't be any harder than sorting the full
> index, and a lot less data would have to be moved around.

Agreed, but it's worth noting that sorting a relatively small in-memory structure (as Git does now) is often cheaper than keeping two files in sync (your full index and your "index index").

> All this is so simple that I suspect that it's been considered before.
> Am I missing something?

You're not missing much. In fact, Git already has features like the split index, the untracked cache, and index format v4 that address similar performance issues through other means. Your idea would likely help in edge cases (very large repos, massive parallelism), but the added complexity and I/O overhead of maintaining multiple files likely outweigh the benefits for the common case.
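To make the "binary search, no full resort" point concrete, here is a minimal, self-contained sketch of the idea. It is not Git's actual code (the real versions live in functions like `index_name_pos()` and `add_index_entry()` in `read-cache.c`); the struct and function names below are simplified for illustration.

```c
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for Git's cache_entry; names are illustrative. */
struct entry {
    char *name;
};

/*
 * Binary-search a sorted array of entry pointers for `name`.
 * Returns the index if found; otherwise returns -insert_pos - 1,
 * the same encoding Git's index_name_pos() uses.
 */
static int name_pos(struct entry **entries, int nr, const char *name)
{
    int lo = 0, hi = nr;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        int cmp = strcmp(name, entries[mid]->name);
        if (cmp == 0)
            return mid;
        if (cmp < 0)
            hi = mid;
        else
            lo = mid + 1;
    }
    return -lo - 1;
}

/*
 * Insert a new entry while keeping the array sorted: one binary
 * search plus one memmove, no full resort. The caller must ensure
 * the array has room for one more pointer.
 */
static void insert_entry(struct entry **entries, int *nr, struct entry *ce)
{
    int pos = name_pos(entries, *nr, ce->name);
    if (pos < 0)
        pos = -pos - 1;  /* not found: decode the insertion point */
    memmove(entries + pos + 1, entries + pos,
            (*nr - pos) * sizeof(*entries));
    entries[pos] = ce;
    (*nr)++;
}
```

Note that the array holds pointers rather than the variable-length entries themselves, so the `memmove` only shuffles pointer-sized slots. That is part of why in-memory maintenance stays cheap even though the entries are not all the same size.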
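For concreteness, here is one way the proposed on-disk records might be laid out. This is purely hypothetical: nothing like it exists in Git today, and every name and field choice below is an assumption made for illustration.

```c
#include <stdint.h>

/*
 * Hypothetical layout for the proposal above; none of this exists
 * in Git. The full index becomes append-only, with a "deleted"
 * flag instead of in-place removal.
 */
#define FULL_ENTRY_DELETED 0x0001

struct full_entry_header {
    uint32_t entry_len;  /* total length, since entries vary in size */
    uint16_t flags;      /* e.g. FULL_ENTRY_DELETED on "git rm --cached" */
    /* ... stat data, object hash, and the path name would follow ... */
};

/* One "partial index entry": the sorted sidecar used for lookups. */
struct partial_entry {
    uint64_t offset;     /* where the full entry starts in the index */
    uint32_t length;     /* length of the full entry */
    uint16_t name_len;   /* path name (name_len bytes) would follow */
};
```

This also makes the trade-off visible: lookups touch two structures, and crash-safety now depends on keeping `offset`/`length` pairs consistent with the append-only file.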
> P.S.
> I'm trying to read the Git source code to get a better handle
> on what actually goes on in the index, but this is taking some time.

You're in good company. The index code lives mostly in `read-cache.c`, a dense but rewarding part of the Git source tree to explore. Good starting points are functions like `read_index_from()`, `do_write_index()`, and `discard_index()`.
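If you want a gentle on-ramp before diving into `read-cache.c`, parsing the index header yourself is a nice exercise. The 12-byte header ("DIRC" signature, version, entry count, all big-endian) is described in Git's index-format documentation; this small sketch just reads and prints it.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* ntohl: header fields are network byte order */

/*
 * Print the 12-byte header of .git/index: a 4-byte "DIRC" signature,
 * a 4-byte version (2, 3, or 4), and a 4-byte entry count.
 */
int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".git/index";
    unsigned char hdr[12];
    FILE *f = fopen(path, "rb");

    if (!f || fread(hdr, 1, sizeof(hdr), f) != sizeof(hdr)) {
        perror(path);
        return 1;
    }
    if (memcmp(hdr, "DIRC", 4) != 0) {
        fprintf(stderr, "%s: not an index file\n", path);
        fclose(f);
        return 1;
    }

    uint32_t version, entries;
    memcpy(&version, hdr + 4, 4);
    memcpy(&entries, hdr + 8, 4);
    printf("version %u, %u entries\n", ntohl(version), ntohl(entries));

    fclose(f);
    return 0;
}
```

Running it against any repository's `.git/index` and comparing the entry count with `git ls-files | wc -l` is a quick way to confirm you're reading the format correctly.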