On September 6, 2025 10:16 AM, Kousik Sanagavarapu wrote:
> Hello everyone,
>
> At my $(DAYJOB), we have an NFS which stores different git repos.
> Due to how git stores objects, we have started to run out of inodes
> on the NFS as the number of repos coming into the NFS increased.
>
> These git repos come from another service and there are typically
> thousands of them each day. It is important to note that we only
> store the .git dir and expose a url which is configured as the
> remote by default to read and write into this repo.
>
> All of these are small repos; usually not many files and not many
> commits either - I'd say ~5 commits on average.
>
> Historically, when we ran out of inodes, we implemented a few
> strategies where we would repack the objects, or archive the older
> repos, move them into another store, and later bring them back into
> this NFS and unarchive them.
>
> However, none of these totally mitigated the issue and we still run
> into it as the traffic increases. As a last resort, we increased the
> disk size even though there was a ton of free space left - just to
> increase the number of inodes.
>
> We can't delete any of these repos, no matter how old, because they
> are valuable data.
>
> I was wondering if there was some other strategy that we could
> implement here, as this seems like a problem that people might often
> run into. It would really help to hear your thoughts, or if you
> could point me anywhere else.

I would suggest running "git gc --aggressive" on your repos. It
should consolidate loose objects and multiple pack files into a
single pack, which frees inodes even when disk space is not the
problem. I have seen customers with thousands of pack files who had
never run a garbage collection. A rough sketch of batching this
across the store is at the end of this mail.

Another thing you might want to try, if it is an option for you, is
sparse-checkout, to keep only the directories you absolutely need in
the working tree (also sketched below).

Also, check your /tmp and lost+found directories for stray files
eating up inodes.
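
Here is a minimal sketch of the batched gc pass I mean, assuming the
repos sit directly under a single store root and are bare (i.e. the
stored directory is the .git dir itself); the path /srv/nfs/repos is
a placeholder, not something from your setup:

    #!/bin/sh
    # Hypothetical root of the NFS store holding the repos.
    STORE=/srv/nfs/repos

    for repo in "$STORE"/*; do
        [ -d "$repo" ] || continue
        # Each loose object is a separate file (and inode); report them first.
        git -C "$repo" count-objects -v
        # Consolidate loose objects and existing packs into a single pack.
        git -C "$repo" gc --aggressive --prune=now
    done

On a store with thousands of repos you would probably want to gc only
the repos whose loose-object count crosses some threshold rather than
walking everything on every run.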
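
For the sparse-checkout suggestion, a minimal sketch in case some of
these repos do get a working tree somewhere; the directory names are
placeholders:

    # Inside a clone with a working tree: materialize only these directories.
    git sparse-checkout init --cone
    git sparse-checkout set src docs

Note that this only trims the working tree; it does not change what
is stored under .git, so it helps only where checkouts, rather than
the object store, are eating your inodes.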