> What's the reason for this?

Basically, to filter out duplicated keys. This also avoids repeating
the same "key not found" error multiple times, as suggested by Eric
[1]. I could use other data structures for this, but I think that
would make the code more complex without any real benefit.

> If I query three keys from a script then it is much easier to parse
> the output if I know the keys are going to appear in the same order
> that they were on the command line.

That assumption would already be slightly broken, since one can ask
for an invalid key. In that case, the command prints the error to
stderr and proceeds to the next value.

> If the command re-orders them my script now has to check the value of
> each key which results in a bunch of unnecessary string comparisons
> because it cannot determine the key from the position in the output.

In cases where the client doesn't want to compare strings, it is still
possible to ask for one key at a time, just like with other Git
commands (e.g. git var, git config). Since this command won't return
too many values, that would be acceptable even if the user requests
all the possible keys.

> While we were producing json output there was a need to de-duplicate
> the keys when that output format was selected. However, we no-longer
> produce json and in any case de-duplication could have been achieved
> without sorting the input keys by using a hash table, or, as there is
> a small fixed number of keys, an array that records the keys we've
> already seen.

I still think that would over-engineer this command (see the sketch at
the end of this message for roughly what I understand the seen-keys
array approach to be). If I follow this path of returning the values
in the same order they were given on the command line, I think it
would be better to simply allow duplicated keys and multiple
"key not found" errors for the same unknown key, rather than
increasing the complexity of this command.

What do you think?

[1] https://lore.kernel.org/git/CAPig+cTxNUPayO2SdCL-BPtjb2rfr3e3RK=BsQxAiiEAtpBaRg@xxxxxxxxxxxxxx/
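
For reference, here is a rough, untested sketch of the seen-keys array
idea as I understand it: de-duplicate the requested keys in
command-line order with a small fixed array, no sorting or hash table.
The names (print_field, print_fields) and the bound of 32 are made up
for illustration; the real lookup would go where print_field is.

#include <stdio.h>
#include <string.h>

/* stand-in for the real per-key lookup and output */
static void print_field(const char *key)
{
	printf("%s=...\n", key);
}

/*
 * Print each requested key in command-line order, skipping
 * duplicates by recording the keys already seen in a small
 * fixed-size array.
 */
static void print_fields(int argc, char **argv)
{
	const char *seen[32]; /* small fixed number of possible keys */
	int nr_seen = 0;

	for (int i = 0; i < argc; i++) {
		int dup = 0;

		for (int j = 0; j < nr_seen; j++)
			if (!strcmp(argv[i], seen[j]))
				dup = 1;
		if (dup)
			continue; /* repeat: keep only the first occurrence */
		if (nr_seen < 32)
			seen[nr_seen++] = argv[i];
		print_field(argv[i]);
	}
}

int main(int argc, char **argv)
{
	print_fields(argc - 1, argv + 1);
	return 0;
}

It's not a lot of code, but it is still an extra mechanism to carry
around, which is what I meant by over-engineering above.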