Hi!

On Fri, Jul 18, 2025 at 05:49:40PM +0200, David Brown wrote:
> > > -fwrapv says "I want something that is not valid C code".  Code that
> > > requires -fwrapv to work as intended is not valid C code (will have UB).
>
> Not quite - the code could be perfectly good ("valid") C code while
> having undefined behaviour depending on the state at the time.

Heh sure.  The code can be valid as long as it is never used!  I already
said "Code that requires -fwrapv to work as intended", so we *know* that
the code is always used.

> (The compiler is free to provide such semantics - the standard
> simply says nothing about what happens on signed integer overflow.)

The standard says it is UB (anything that isn't defined is).  GCC
upgrades that to IB though, in some cases (only for conversions!)

> > It would be just another extension, and one that many compilers
> > already enable by default.  Even GCC makes casting from unsigned to
> > int defined in all cases because doing that in a standard-conforming
> > way is way too painful.
>
> I may be misunderstanding what you wrote here.  In cases where something
> is undefined in the C standards, a compiler can define the behaviour if
> it wants - that does not break standards conformance in any way.

Sure, but code that depends on that is not standard code.

> Converting from an unsigned integer type to a signed integer type is
> fully defined in the C standards if the value can be represented and
> does not change.  If not (because it is too big), the result is
> implementation-defined (or an implementation-defined trap).

Yes.  So pretty much UB.

> gcc does this by two's complement wrapping and modulo (basically, it
> generally does nothing as all its targets are two's complement) - that
> is entirely standard-conforming.

Sure.  But that does not make code that depends on this
standard-conforming!

> I have read the manuals for a good many different C compilers over the
> years (mostly for embedded microcontrollers).  I have yet to find one
> that documents two's complement wrapping behaviour for signed integer
> overflow, other than gcc's description of "-fwrapv".  Many compilers
> /do/ treat signed overflow as two's complement wrapping - but many more
> ignore the issue entirely and simply happen to generate code that acts
> that way because they don't have any optimisations that are enabled by
> reasoning about integer overflow.  Some, such as Microsoft's MSVC, have
> only a very few such optimisations.

GCC itself treats signed overflow as the UB it is in many places.  Loop
optimisation for example: overflows are undefined, so it is free to
assume this never happens, all simple counted loops are finite, etc.

> It's fine, IMHO, to write code that relies on extensions or
> compiler-specific features.  (In my line of work, it is unavoidable.)
> What is /not/ fine is to rely on such things without the confidence of
> documented behaviour.  People who know their code will only be compiled
> with gcc can safely rely on wrapping overflows if they are sure
> "-fwrapv" is in effect - preferably with a #pragma or function
> __attribute__ so that it is independent of compiler invocation flags.
> But they should not rely on it for other compilers, unless they can
> show a documented guarantee of the semantic extension.

Of course that is fine, but it should always be documented.

I think we violently agree :-)


Segher
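
P.S.: A tiny example of the conversion case, in case that helps.  The
standard only promises an implementation-defined result here; the -1 is
what GCC documents (reduction modulo 2^N), not something portable code
can count on:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int u = UINT_MAX;  /* does not fit in int (32-bit int assumed) */
        int i = (int)u;             /* implementation-defined result per the
                                       standard; GCC reduces modulo 2^32 -> -1 */
        printf("%d\n", i);
        return 0;
    }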
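
And a sketch of the loop point, plus the per-function way to ask for
-fwrapv.  Whether "wrapv" is accepted by the "optimize" attribute depends
on the GCC version, so treat that part as an assumption and check your
compiler's documentation before relying on it:

    /* If n == INT_MAX the exit condition (i > n) can never become true
       without i overflowing.  Signed overflow is UB, so GCC may assume
       it never happens and treat this as an ordinary finite counted
       loop; with -fwrapv, i wraps to INT_MIN instead and the loop never
       terminates. */
    long sum_up_to(int n)
    {
        long s = 0;
        for (int i = 0; i <= n; i++)
            s += i;
        return s;
    }

    /* Per-function -fwrapv, independent of the command-line flags
       (assumes the "optimize" attribute accepts "wrapv"): */
    __attribute__((optimize("wrapv")))
    int wrapping_add(int a, int b)
    {
        return a + b;  /* wraps in two's complement under -fwrapv */
    }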