Re: Running out of inodes on an NFS which stores repos

On 2025-09-06 at 14:16:12, Kousik Sanagavarapu wrote:
> Hello everyone,

Hi,

> These git repos come from another service, and there are typically
> thousands of them each day. It is important to note that we only store
> the .git dir and expose a URL, which is configured as the default
> remote for reading from and writing to this repo.
> 
> All of these are small repos; usually not many files and not many
> commits either - I'd say ~5 commits on average.
> 
> Historically, when we ran out of inodes, we implemented a few
> strategies: we would repack the objects, or archive the older repos,
> move them into another store, and later bring them back onto this NFS
> and unarchive them.
> 
> However, none of these totally mitigated the issue, and we still run
> into it as the traffic increases. As a last resort, we increased the
> disk size even though there was a ton of free space left - just to
> increase the number of inodes.
> 
> We can't delete any of these repos, no matter how old, because they are
> valuable data.
> 
> I was wondering if there was some other strategy that we could implement
> here, as this seems like a problem that people might often run into. It
> would really help to hear your thoughts, or if you could point me
> elsewhere.

There are a couple of things that come to mind here.  You can try
setting `fetch.unpackLimit` to 1, which will cause all of the objects
pushed into the repository to end up in a pack.  That means you'll
usually have only two files, the pack and its index, rather than many
loose objects.
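
For example, something like this (a sketch; the repository path is made
up, and you may prefer to set it globally for the user Git runs as):

    # store even small pushes as a pack instead of loose objects
    git -C /srv/repos/example.git config fetch.unpackLimit 1

    # or set it once for every repository that user touches
    git config --global fetch.unpackLimit 1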

If you have a large number of references, you may wish to convert the
repositories to use the reftable backend instead of the files backend
(via `git refs migrate --ref-format=reftable`), which will also tend to
use fewer files on disk.  Note that this requires a relatively new Git,
so if you need to access these repositories with an older Git version,
don't do this.
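
A rough sketch of the migration (the repository path is made up; check
your Git version first, since this support is recent):

    # check that this Git is new enough to know about reftable
    git version

    # migrate one bare repository's references to the reftable backend
    git -C /srv/repos/example.git refs migrate --ref-format=reftable

    # confirm which backend the repository now uses
    git -C /srv/repos/example.git rev-parse --show-ref-format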

You can also repack more frequently by setting `gc.autoPackLimit` to a
smaller number (in conjunction with `fetch.unpackLimit` above).  If you
have repositories that are not packed at all, running `git gc` (or, if
you don't want to remove any objects, `git repack -d --cruft`) will
likely reduce the number of loose objects and result in more objects
being packed.
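
Concretely, that could look like the following (the limit of 4 is just
an example, and the path is made up):

    # repack automatically once more than 4 packs accumulate (default is 50)
    git -C /srv/repos/example.git config gc.autoPackLimit 4

    # pack reachable and unreachable objects, then drop the now-redundant
    # loose copies, without deleting any objects
    git -C /srv/repos/example.git repack -d --cruft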

Finally, it may be useful to reformat the underlying file system in a
way that gives you more inodes.  I know ext4 supports a higher inode
ratio for file systems with many small files.  Alternatively, btrfs
apparently does not have a fixed inode ratio, so switching to it may
help you avoid running out of inodes altogether.  I can't speak to
non-Linux file systems, though.
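
If you go that route, note that the inode ratio on ext4 is fixed when
the file system is created, so this means recreating it after copying
the data elsewhere; something like the following (device and numbers
are only illustrative):

    # one inode per 4 KiB of space instead of the default 16 KiB
    mkfs.ext4 -i 4096 /dev/sdX1

    # or the predefined "news" usage type, which also picks a small
    # bytes-per-inode ratio
    mkfs.ext4 -T news /dev/sdX1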
-- 
brian m. carlson (they/them)
Toronto, Ontario, CA
