Johannes Schindelin <Johannes.Schindelin@xxxxxx> writes:

> .... From my point of view, Git is spending way more compute than is
> warranted. The way Git's CI builds are set up, in many cases a single
> regression will cause many tests/jobs to fail, and that indicates to me
> that Git's CI definition (and even Git's test suite) contains too many
> redundant parts.

While I also feel frustrated watching paint dry after pushing out the
day's integration results, often only to see multiple CI jobs fail due
to the same breakage in 'seen', and while I do wish there were ways to
avoid such waste, I cannot think of a good one [*].

Are there any concrete proposals?

Thanks.

[Footnote]

* For example, if gitlab-ci and github-ci run the same CI jobs on the
  exact same revision of Git using the exact same docker image, and
  there is no reason to expect one to succeed while the other fails,
  perhaps we can drop one and keep the other?  Or perhaps we could pick
  a single representative job and start the other jobs only after it
  passes?  None of the tweaks along these lines that I can think of
  feel satisfying to me.
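
  To show the shape of the "representative job" idea in the footnote, a
  gating arrangement in a GitHub Actions workflow could look roughly
  like the sketch below.  This is only an illustration, not a drop-in
  change for our actual workflow file; the job names, the matrix
  entries, and the smoke-test command are made up for the example.

      jobs:
        gate:
          # A hypothetical cheap "representative" job that must pass
          # before anything else is started.
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - run: make DEVELOPER=1 && make -C t T=t0000-basic.sh
        full-matrix:
          # The expensive jobs only start once the gate succeeds.
          needs: gate
          runs-on: ubuntu-latest
          strategy:
            matrix:
              # Illustrative names, not our real CI matrix.
              jobname: [linux-gcc, linux-clang, osx-clang]
          env:
            jobname: ${{ matrix.jobname }}
          steps:
            - uses: actions/checkout@v4
            - run: ci/run-build-and-tests.sh

  The obvious downside, which is part of why none of this feels
  satisfying, is that serializing behind a gate adds the gate's full
  runtime to every push where nothing is broken.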