Could someone explain this weird looking output on a 32 bit machine?

```c
#include <stdio.h>

int main() {
    printf("16777217 as float is %.1f\n", (float)16777217);
    printf("16777219 as float is %.1f\n", (float)16777219);
    return 0;
}
```

Output

```
16777217 as float is 16777216.0
16777219 as float is 16777220.0
```

The weird thing is that 16777217 casts to a lower value and 16777219 casts to a higher value...

In the IEEE-754 basic 32-bit binary floating-point format, all integers from −16,777,216 to +16,777,216 are representable. From 16,777,216 to 33,554,432, only even integers are representable. Then, from 33,554,432 to 67,108,864, only multiples of four are representable. (Since the question does not necessitate discussion of which numbers are representable, I will omit explanation and just take this for granted.)

The most common default rounding mode is to round the exact mathematical result to the nearest representable value and, in case of a tie, to round to the representable value which has zero in the low bit of its significand.

16,777,217 is equidistant between the two representable values 16,777,216 and 16,777,218. These values are represented as 100000000000000000000000_{2}•2^{1} and 100000000000000000000001_{2}•2^{1}. The former has 0 in the low bit of its significand, so it is chosen as the result.

16,777,219 is equidistant between the two representable values 16,777,218 and 16,777,220. These values are represented as 100000000000000000000001_{2}•2^{1} and 100000000000000000000010_{2}•2^{1}. The latter has 0 in the low bit of its significand, so it is chosen as the result.