When you have two or more objects with object names that share a
prefix longer than half the length of the hash algorithm in use
(e.g. 10 bytes, i.e. 20 hex digits, for SHA-1, which produces a
20-byte/160-bit hash), find_unique_abbrev() fails to disambiguate
them.

To decide how many leading hex digits of a given full object name
are needed to make it unambiguous, the algorithm starts from an
initial length, guessed from the estimated number of objects in the
repository, checks whether another object shares that prefix, and
keeps extending the abbreviation as long as one does.  The loop has
stopped at GIT_MAX_RAWSZ, which counts bytes, since 5b20ace6
(sha1_name: unroll len loop in find_unique_abbrev_r(), 2017-10-08);
before that change, it extended up to GIT_MAX_HEXSZ, which is the
correct limit, because the loop adds one output hex digit per
iteration.

Signed-off-by: Junio C Hamano <gitster@xxxxxxxxx>
---

 * No tests added, since I do not think I want to find two valid
   objects whose object names share a prefix that is more than 20
   hex digits long.  The current abbreviation code happens to ignore
   the validity of objects and takes invalid ones into account when
   disambiguating, but I do not want to see a test rely on that.

   Git 2.15 (which predates 5b20ace6; that change is in Git 2.16)
   does the right thing, even though it is a bit too old a codebase
   to build these days with up-to-date libraries (I had to omit curl
   and pcre, as I did not need to get them working again only to see
   how its disambiguation code works).

 object-name.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git c/object-name.c w/object-name.c
index 11aa0e6afc..13e8a4e47d 100644
--- c/object-name.c
+++ w/object-name.c
@@ -704,7 +704,7 @@ static int extend_abbrev_len(const struct object_id *oid, void *cb_data)
 
 	while (mad->hex[i] && mad->hex[i] == get_hex_char_from_oid(oid, i))
 		i++;
-	if (i < GIT_MAX_RAWSZ && i >= mad->cur_len)
+	if (i < GIT_MAX_HEXSZ && i >= mad->cur_len)
 		mad->cur_len = i + 1;
 
 	return 0;