Running out of inodes on an NFS which stores repos


 



Hello everyone,
At my $(DAYJOB), we have an NFS share that stores many different git repos.
Due to how git stores objects, we have started running out of inodes on
the NFS as the number of repos coming into it has increased.
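
For context, git keeps each loose object in its own file under
.git/objects/<xx>/, so an unpacked repo costs roughly one inode per
object, on top of refs and other metadata. A rough way to see that per
repo (the path below is just an example):

    # count the object files a single bare repo is holding
    find /srv/repos/example.git/objects -type f | wc -l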

These git repos come from another service, and there are typically
thousands of them each day. It is important to note that we only store
the .git dir and expose a URL, which is configured as the default
remote for reading from and writing to the repo.

All of these are small repos: usually not many files and not many
commits either - I'd say ~5 commits on average.

Historically, when we ran out of inodes, we implemented a few
strategies: repacking the objects, or archiving the older repos,
moving them into another store, and later bringing them back onto this
NFS and unarchiving them.
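
The repacking is along the lines of the following (illustrative only;
the exact options differ per repo, and the path is made up):

    # collapse loose objects into a single packfile and drop the
    # now-redundant loose files, freeing most of the per-object inodes
    git -C /srv/repos/example.git repack -a -d
    git -C /srv/repos/example.git prune-packed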

However, none of these totally mitigated the issue, and we still run
into it as the traffic increases. As a last resort, we increased the
disk size even though there was a ton of free space left - just to
increase the number of inodes.
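
(For the record, this is the classic "df -h shows plenty of space,
df -i shows IUse% near 100" situation; the mount point below is made up:)

    df -h /mnt/repo-store    # plenty of blocks free
    df -i /mnt/repo-store    # ...but inodes nearly exhausted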

We can't delete any of these repos, no matter how old, because they are
valuable data.

I was wondering if there is some other strategy we could implement
here, as this seems like a problem that people might often run into. It
would really help to hear your thoughts, or if you could point me
elsewhere.

Thanks



