I have this code in C where I've declared 0.1 as a double. This is what it prints: a is 0.10000000000000001000000000000000000000000000000000000000
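The C source itself isn't included in this excerpt; below is a minimal sketch of the kind of program that produces such output, assuming the value is printed with a wide %f precision specifier (exactly which digits appear past the 17th significant figure depends on the C library's float-to-decimal conversion).

```c
#include <stdio.h>

int main(void)
{
    double a = 0.1;  /* stored as the nearest representable double, not exactly 0.1 */
    /* Requesting far more digits than a double actually carries
       exposes the rounding error in the stored value. */
    printf("a is %.56f\n", a);
    return 0;
}
```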
When creating a new LocalDateTime using LocalDateTime.now() on my Mac and on my Windows machine, I get a fractional-second (nano) precision of 6 digits on the Mac and 3 digits on the Windows machine. Both are running jdk-1.8.0-172.
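The snippet used to observe this isn't shown in the excerpt; a minimal Java sketch of one way to inspect the precision, assuming it is judged from the printed value and from getNano():

```java
import java.time.LocalDateTime;

public class NowPrecision {
    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.now();
        // toString() drops trailing zero groups, so the number of printed
        // fractional digits (3, 6, or 9) reflects the platform clock's resolution.
        System.out.println(now);
        // Nanoseconds-of-second as reported by the underlying clock.
        System.out.println(now.getNano());
    }
}
```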
I have read this: "Why Are Floating Point Numbers Inaccurate?" So, sometimes the precision of a floating-point number can change because of its representation style (binary scientific notation, with an exponent and a mantissa).
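That exponent-and-mantissa representation can be made visible directly; a small C sketch, assuming the C99 %a conversion is available, which prints the exact binary form the machine actually stores:

```c
#include <stdio.h>

int main(void)
{
    double a = 0.1;
    /* %a prints the stored value as a hexadecimal mantissa times a
       power-of-two exponent - the machine's "scientific notation". */
    printf("%a\n", a);     /* typically 0x1.999999999999ap-4: the repeating
                              pattern shows 0.1 has no finite binary expansion */
    printf("%.17g\n", a);  /* 17 significant digits: 0.10000000000000001 */
    return 0;
}
```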
Could someone explain this weird-looking output on a 32-bit machine? The weird thing is that 16777217 casts to a lower value and 16777219 casts to a higher value...
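The code and output blocks aren't reproduced in this excerpt; assuming the asker is casting int values to float, a minimal C sketch that shows the same behavior:

```c
#include <stdio.h>

int main(void)
{
    /* Above 2^24 = 16777216 a float can only represent even integers,
       so both 16777217 and 16777219 fall exactly halfway between two
       representable values; round-to-nearest-even breaks the ties. */
    printf("%f\n", (float)16777217);  /* 16777216.000000 - tie broken downward */
    printf("%f\n", (float)16777219);  /* 16777220.000000 - tie broken upward   */
    return 0;
}
```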