Carlo Marcelo Arenas Belón <carenas@xxxxxxxxx> writes:

> -	length = sizeof(int64_t);
> -	if (!sysctl(mib, 2, &physical_memory, &length, NULL, 0))
> +	length = sizeof(physical_memory);
> +	if (!sysctl(mib, 2, &physical_memory, &length, NULL, 0)) {
> +		if (length < sizeof(physical_memory)) {
> +			unsigned bits = (sizeof(physical_memory) - length) * 8;
> +
> +			physical_memory <<= bits;
> +			physical_memory >>= bits;

I do not quite understand this version.  Does the correctness of
this depend on the machine having a certain byte-order?

The system call treats &physical_memory as a mere blob of bytes and
may tell us that it filled only 4 bytes out of 8.  Depending on the
endianness, left-shifting by 4*8 bits first may discard the real
information (i.e., on a big-endian machine).

On a little-endian 32-bit box, it might give us length == 4,
filling only the lower half of the i64.  Shifting 32 bits to the
left and then 32 bits back to the right may fill the upper half
with 1s when the 4-byte value is more than 2GB, because
physical_memory is a signed type, and we then cast that value to
u64.  That does not sound correct, either (a small illustration
appears after the quoted hunk below).

Would it make more sense to pass &u64 and return it only when
length == 8, as you did in v2, which also removes the need to
cast?

> +		}
>  		return physical_memory;
> +	}
>  #elif defined(GIT_WINDOWS_NATIVE)
>  	MEMORYSTATUSEX memInfo;
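
To illustrate the sign-extension concern, here is a tiny standalone
program (not part of the patch; the ~3GB value is made up) that
mimics the little-endian "length == 4" case and the two shifts:

	#include <inttypes.h>
	#include <stdio.h>

	int main(void)
	{
		/* pretend sysctl() reported length == 4 and filled only
		 * the low 4 bytes (little endian) with ~3GB */
		int64_t physical_memory = 0xC0000000;
		unsigned bits = (sizeof(physical_memory) - 4) * 8; /* 32 */

		physical_memory <<= bits; /* undefined for a signed type once
					   * a bit reaches the sign bit */
		physical_memory >>= bits; /* implementation-defined; usually an
					   * arithmetic shift, i.e. it
					   * sign-extends */

		/* typically prints 0xffffffffc0000000, not 0xc0000000 */
		printf("%#" PRIx64 "\n", (uint64_t)physical_memory);
		return 0;
	}

And something like the following is what the question above is
getting at; it is only a sketch (the function name, the mib[] set-up
and the HW_* names are illustrative and differ across BSDs), not the
actual v2 code, but with an unsigned buffer and a strict length check
there is nothing to mask or cast:

	#include <stdint.h>
	#include <stddef.h>
	#include <sys/types.h>
	#include <sys/sysctl.h>

	static uint64_t total_physical_memory(void)
	{
		/* HW_MEMSIZE on macOS; other BSDs use HW_PHYSMEM64/HW_PHYSMEM */
		int mib[2] = { CTL_HW, HW_MEMSIZE };
		uint64_t physical_memory;
		size_t length = sizeof(physical_memory);

		if (!sysctl(mib, 2, &physical_memory, &length, NULL, 0) &&
		    length == sizeof(physical_memory))
			return physical_memory;
		return 0; /* signal "unknown" to the caller */
	}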