Hi!

On Fri, Jul 18, 2025 at 06:39:41PM +0200, David Brown wrote:
> > While this is true in principle, it's not how -fwrapv (or undefined
> > signed overflow) is implemented in GCC. When writing optimizations,
> > you have to be careful not to introduce signed overflow that was not
> > present in the original code because there aren't separate tree
> > operations for wrapping and overflowing operations.
> >
> > There aren't many examples like this in the code base today, perhaps
> > because -fwrapv is not the default and any such optimization would
> > not get used much. But here's one:

There isn't much in the GCC code that deals with -fwrapv at all. "grep
flag_wrapv" finds 29 occurrences only, and 3 more in config/ (all for
Andes).

> I am not at all well-versed in the internals of GCC, so I don't know
> what is going on in that code. But I am not aware of any situation
> where using wrapping instructions could introduce new overflows that
> made it through to the final answer. In any combination of additions
> and multiplications, it doesn't matter when you (logically) apply the
> modulo operation to limit your range to your bit size - the result is
> the same.
>
> But what /could/ happen is that you have extra intermediary overflows.
> If you have "-ftrapv" in action, then "a * (x - y)" and
> "(a * x) - (a * y)" can have different behaviour if there are
> overflows in the intermediary parts.
>
> However, when "-ftrapv" is not in effect, I cannot see how "-fwrapv"
> allows any extra optimisations.

Yup, exactly.

> Are you able to give an example of the C code for which the
> optimisation above applies, and values for which the result is
> affected? (When thinking about overflows, I always like to use 16-bit
> int because the numbers are smaller and easier to work with.)

16 bits?  Why so big!  :-)

> Conversion to signed integer types is implementation-defined behaviour
> in the C standards, not undefined behaviour. That means the compiler
> must pick a specific tactic which is documented (in section 4.5 of the
> gcc manual) and applied consistently. It is not undefined behaviour -
> code that relies on two's complement conversion of unsigned types to
> signed types is not incorrect code, merely non-portable code. (In
> practice, of course, it is portable, as all real-world compilers use
> the same tactic on two's complement targets.)

Leaving it as UB is a correct implementation. If that is documented it
is IB then, but what is the difference here :-)


Segher
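
An illustrative sketch of the intermediary-overflow point above, with
values that are not from the thread and assuming the usual 32-bit int:
the final result of a * (x - y) fits, but the intermediate product a * x
in the distributed form does not, so the distributed form traps under
-ftrapv while both forms still agree under -fwrapv.

  /* Sketch only: assumes 32-bit int.  The final result fits in int,
     but the intermediate product a * x in the distributed form does
     not.  Under -ftrapv the distributed form aborts on that
     intermediate overflow; under -fwrapv both forms wrap and still
     produce the same final value. */
  #include <stdio.h>

  int original(int a, int x, int y)    { return a * (x - y); }
  int distributed(int a, int x, int y) { return (a * x) - (a * y); }

  int main(void)
  {
      int a = 70000, x = 70000, y = 69999;

      printf("%d\n", original(a, x, y));     /* 70000, no overflow */
      printf("%d\n", distributed(a, x, y));  /* a * x = 4.9e9 overflows:
                                                traps with -ftrapv,
                                                wraps to 70000 with
                                                -fwrapv */
      return 0;
  }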
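
And a minimal sketch of the implementation-defined conversion David
describes, assuming a 16-bit short and the rule GCC documents in its
manual (reduction modulo 2^N, i.e. two's complement reinterpretation):

  /* Sketch only: assumes short is 16 bits.  GCC documents that a value
     that does not fit the signed destination type is reduced modulo
     2^N, which matches the two's complement bit pattern. */
  #include <stdio.h>

  int main(void)
  {
      unsigned short u = 40000;   /* does not fit in signed short */
      short s = (short)u;         /* implementation-defined result */

      printf("%d\n", s);          /* -25536 under GCC's documented
                                     rule: 40000 - 65536 */
      return 0;
  }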