Phillip Wood wrote:
> On 26/05/2025 13:44, Patrick Steinhardt wrote:
>>
>> I don't think it's inherently a bad thing to fail on unexpected passes.
>> After all, it shows that our assumption that the test fails is broken,
>> and that we should have a look at why that is. But I can see arguments
>> both ways.
>
> Personally I'd be very happy if our test suite failed on an unexpected
> pass. Currently it is easy to miss, especially if the unexpected pass
> occurs in a CI run. Missing an unexpected pass means we don't change
> 'test_expect_failure' to 'test_expect_success' and a future regression
> that causes the test to fail again will go unnoticed.

Indeed. Perhaps related (apologies if it's a wild tangent), having a way
to expose an unexpectedly failed prereq would be nice. For example, we
currently (well, as of the last time I checked, a month or so ago) fail
the GPG2 prereq.

I submitted a small patch series to fix that nearly a year ago¹, but when
I ran the tests in our CI, they turned up some preexisting failures. I
spent a little time trying to reproduce and resolve those failures, but
was never able to make it work. The tests pass when run locally, which
makes the failures painful to track down. It would have been ideal if
they had failed when they were added, so that the problem could have been
worked out during the review period, while it was still fresh in the
minds of the folks working in that area.

As it stands, we are simply not noticing these failures in our CI, which
feels worse than not having the test coverage at all. It gives a false
sense of security that t/t1016-compatObjectFormat.sh is passing, when in
reality the tests might be reporting a real issue that we've been missing
for ages.

¹ <20240703153738.916469-1-tmz@xxxxxxxxx>

--
Todd
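
To make the prereq half of this concrete, here is roughly what such a
guard looks like (a minimal sketch of the mechanism only; this is not the
real GPG2 definition from t/lib-gpg.sh, and the version check shown is
made up):

    # A lazy prereq is evaluated the first time a test asks for it;
    # if the body fails, the prereq is simply left unsatisfied.
    test_lazy_prereq GPG2 '
        gpg --version >gpg.version 2>&1 &&
        grep "^gpg (GnuPG) 2\." gpg.version
    '

    # Tests guarded by the prereq are then reported as "skipped",
    # not "failed", so a broken gpg setup never turns the CI run red.
    test_expect_success GPG2 'sign a commit' '
        git commit --allow-empty -S -m signed
    '

A failed prereq only ever downgrades its tests to skips, which is why
nothing in the CI results points at the broken gpg setup.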