Why does double in C print fewer decimal digits than C++?


I have this code in C where I've declared 0.1 as a double.

#include <stdio.h>

int main() {
    double a = 0.1;

    printf("a is %0.56f\n", a);
    return 0;
}

This is what it prints: a is 0.10000000000000001000000000000000000000000000000000000000

Same code in C++,

#include <iostream>
using namespace std;

int main() {
    double a = 0.1;

    printf("a is %0.56f\n", a);
    return 0;
}

This is what it prints: a is 0.1000000000000000055511151231257827021181583404541015625

What is the difference? From what I have read, both are allotted 8 bytes, so how does C++ print more digits after the decimal point?

Also, how can it go up to 55 decimal places? IEEE 754 floating point has only 52 bits for the fraction, which gives about 15 decimal digits of precision. It is stored in binary. How come its decimal interpretation stores more?

 


With MinGW g++ (and gcc) 7.3.0 your results are reproduced exactly.

This is a pretty weird case of Undefined Behavior.

In the C++ code, change <iostream> to <stdio.h> to get valid C++ code, and you get the same result as with the C program.
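A minimal sketch of that change (only the include differs from the original C++ program):

#include <stdio.h>

int main() {
    double a = 0.1;

    // printf is now properly declared via <stdio.h>; per the above,
    // on MinGW this prints the same (shorter) result as the C program.
    printf("a is %0.56f\n", a);
    return 0;
}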


Why does the C++ code even compile?

Well, unlike in C, in C++ a standard library header is allowed to drag in any other header. And evidently, with g++, the <iostream> header drags in some declaration of printf, just not an entirely correct one.
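If you want to keep the program as idiomatic C++, a sketch of the usual approach is to include <cstdio> yourself instead of relying on whatever <iostream> happens to pull in:

#include <cstdio>   // declares std::printf explicitly; no reliance on <iostream> side effects

int main() {
    double a = 0.1;
    std::printf("a is %0.56f\n", a);
    return 0;
}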


Regarding

Also, how can it go up to 55 decimal places? IEEE 754 floating point has only 52 bits for the fraction, which gives about 15 decimal digits of precision. It is stored in binary. How come its decimal interpretation stores more?

... it's just the decimal presentation that's longer. It can be arbitrarily long. But the digits beyond the precision of the internal representation are essentially garbage.
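A small sketch of what that means: the long value in the question's C++ output is exactly the double nearest to 0.1, so parsing it back yields the same stored value, and 17 significant digits are already enough to round-trip a double.

#include <cstdio>

int main() {
    double a = 0.1;

    // The long C++ output from the question, used as a literal:
    // it is the exact value of the double nearest to 0.1.
    double b = 0.1000000000000000055511151231257827021181583404541015625;

    std::printf("same double: %d\n", a == b);  // prints 1
    std::printf("round trip:  %.17g\n", a);    // 17 significant digits: 0.10000000000000001
    return 0;
}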
