On 18/07/2025 16:32, Florian Weimer via Gcc-help wrote:
* Segher Boessenkool:
On Mon, Jul 14, 2025 at 03:03:46PM -0700, Florian Weimer wrote:
* Segher Boessenkool:
-fwrapv is a great way to get slower code, too. Is there something in
your code that does not work without this reality-distorting flag?
It really depends on the code. In many cases, -fwrapv enables
additional optimizations. For example, it's much easier to use (in C
code) the implicit sign bit many CPUs compute for free.
"-fwrapv" in itself does not enable any additional optimisations as far
as I know. In particular, whenever you don't have that flag active
and the compiler could generate more efficient code by using wrapping
behaviour for two's complement arithmetic, it is free to do so -
since signed overflow is undefined behaviour in C, the compiler can
treat it as defined to wrap whenever that suits it.
But "-fwrapv" does let you write certain kinds of code in a more
convenient form than if it were not active. Since signed arithmetic
overflow is normally UB, you have to "look before you leap" if overflow
is a possibility. With "-fwrapv", in some cases, you can leap first and
check for overflow afterwards. Alternative ways to handle such
situations are gcc's __builtin_add_overflow() or C23's ckd_add() (from
<stdckdint.h>) and related functions.
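
To make the "look first" versus "leap first" distinction concrete,
here is a rough sketch (the function names are my own invention):

#include <limits.h>
#include <stdbool.h>

/* Standard C: check *before* the addition, because the overflowing
   addition itself would be undefined behaviour. */
bool add_checked_before(int y, int z, int *x)
{
    if ((z > 0 && y > INT_MAX - z) || (z < 0 && y < INT_MIN - z))
        return false;       /* would overflow */
    *x = y + z;
    return true;
}

/* With -fwrapv in effect (or a compiler that documents wrapping):
   do the addition first, then check the sign of the result. */
bool add_checked_after_wrapv(int y, int z, int *x)
{
    int sum = y + z;        /* wraps, by -fwrapv's guarantee */
    *x = sum;               /* wrapped result stored even on overflow */
    return !((z > 0 && sum < y) || (z < 0 && sum > y));
}

/* gcc/clang builtin - works regardless of -fwrapv, and usually maps
   straight onto the CPU's overflow flag.  (C23's ckd_add() is
   similar, but takes the result pointer as its first argument.) */
bool add_checked_builtin(int y, int z, int *x)
{
    return !__builtin_add_overflow(y, z, x);
}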
-fwrapv says "I want something that is not valid C code". Code that
requires -fwrapv to work as intended is not valid C code (will have UB).
Not quite - the code could be perfectly good ("valid") C code while
having undefined behaviour depending on the state at the time. When x,
y and z are ints, "x = y + z;" is valid C code with defined behaviour,
unless it has an overflow at run-time. But yes, if code relies on
"-fwrap" to work as intended, it is relying on additional semantics
beyond those given in the C standard. (The compiler is free to provide
such semantics - the standard simply says nothing about what happens on
signed integer overflow.)
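
To illustrate the distinction - the code itself is perfectly valid;
only particular run-time values make its behaviour undefined:

int add(int y, int z)
{
    return y + z;   /* valid C; defined unless the sum overflows */
}

/* add(1, 2) is fine and gives 3.  add(INT_MAX, 1) has undefined
   behaviour - unless the implementation defines it, e.g. via
   "-fwrapv". */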
It would be just another extension, and one that many compilers already
enable by default. Even GCC makes casting from unsigned to int defined
in all cases because doing that in a standard-conforming way is way too
painful.
I may be misunderstanding what you wrote here. In cases where something
is undefined in the C standards, a compiler can define the behaviour if
it wants - that does not break standards conformance in any way.
Converting from an unsigned integer type to a signed integer type is
fully defined in the C standards if the value can be represented in
the destination type - the value is unchanged. If it cannot (because
it is too big), the result is implementation-defined (or an
implementation-defined signal is raised). gcc defines this as
reduction modulo 2^N - i.e., two's complement wrapping (in practice
it usually has to do nothing, as all its targets are two's
complement) - and that is entirely standard-conforming.
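
A concrete example (assuming 32-bit int, and relying on gcc's
documented choice of modulo reduction):

#include <stdio.h>

int main(void)
{
    unsigned int u = 0x80000000u;   /* too big for a 32-bit int */
    int i = (int) u;                /* implementation-defined result */

    /* gcc reduces the value modulo 2^N, so with 32-bit int this
       prints -2147483648.  Other compilers may give a different
       result, or raise an implementation-defined signal. */
    printf("%d\n", i);
    return 0;
}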
I have read the manuals for a good many different C compilers over the
years (mostly for embedded microcontrollers). I have yet to find one
that documents two's complement wrapping behaviour for signed integer
overflow, other than gcc's description of "-fwrapv". Many compilers
/do/ treat signed overflow as two's complement wrapping - but many more
ignore the issue entirely and simply happen to generate code that acts
that way because they don't have any optimisations that are enabled by
reasoning about signed integer overflow. Some, such as Microsoft's
MSVC, have only a very few such optimisations.
It's fine, IMHO, to write code that relies on extensions or
compiler-specific features. (In my line of work, it is unavoidable.)
What is /not/ fine is to rely on such things without the confidence of
documented behaviour. People who know their code will only be compiled
with gcc can safely rely on wrapping overflows if they are sure
"-fwrapv" is in effect - preferably with a #pragma or function
__attribute__ so that it is independent of compiler invocation flags.
But they should not rely on it for other compilers unless they can
show a documented guarantee of the semantic extension.
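
For the gcc-only case, a minimal sketch of the #pragma and
__attribute__ forms I mean (gcc-specific, and gcc has its own
caveats about the optimize pragma and attribute):

/* Applies to all functions defined after this point in the file. */
#pragma GCC optimize ("wrapv")

int wrapping_add(int a, int b)
{
    return a + b;   /* wraps, per the pragma above */
}

/* Or per-function, via the optimize attribute. */
__attribute__((optimize("wrapv")))
int wrapping_mul(int a, int b)
{
    return a * b;   /* wraps, per the attribute */
}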