
I'm reading the C# in a Nutshell book and it shows this table:

I'm having a hard time understanding the table. It says that `double` takes `64 bits` of space and ranges from `10^-324` to `10^308`. `decimal` takes `128 bits` of space, BUT it also says that it only ranges from `10^-28` to `10^28`. So what I'm understanding here is that `decimal` takes more space but provides a shorter range? That doesn't make much sense in my head, since everyone agrees that `decimal` should be used when precision is required. Also, when doing a calculation like `(1/3) * 3`, the desired result is `1`, but only `float` and `double` give me `1`; `decimal` gives me `0.9999...`. So why is `decimal` more precise? I don't really understand.

> what I'm understanding here is that decimal takes more space but provides a shorter range?

Correct. It provides higher precision and smaller range. Plainly, if you have a limited number of bits, you can increase precision only by decreasing range!
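You can see both ends of that tradeoff directly in the framework's own constants (a small sketch; the exact formatting of the output depends on your runtime and culture settings):

```csharp
using System;

// double: enormous range, but only ~15-17 significant decimal digits.
Console.WriteLine(double.MaxValue);   // about 1.7976931348623157E+308

// decimal: range capped near 7.9 x 10^28, but 28-29 significant digits.
Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335
```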

> everyone agrees that decimal should be used when precision is required

Since that statement is false -- in particular, I do not agree with it -- any conclusion you draw from it is not sound.

The purpose of using decimal is not higher precision. It is *smaller representation error*. Higher precision is one way to achieve smaller representation error, but decimal does not achieve its smaller representation error by being higher precision. It achieves its smaller representation error by *exactly representing decimal fractions*.

Decimal is for those scenarios where the representation error of a decimal fraction must be *zero*, such as a financial computation.
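A minimal sketch of that difference, using the `"G17"` format so the double prints every digit it actually stores:

```csharp
using System;

double d = 0.1;    // 1/10 is a repeating binary fraction, so this holds only the *nearest* double
decimal m = 0.1m;  // 1/10 is a decimal fraction, so this is stored exactly

Console.WriteLine(d.ToString("G17"));  // shows a tiny representation error, e.g. 0.10000000000000001
Console.WriteLine(m);                  // 0.1
```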

> Also, when doing a calculation like `(1/3) * 3`, the desired result is `1`, but only `float` and `double` give me `1`

You got lucky. There are lots of fractions where the representation error of that computation is non-zero for both floats and doubles.
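You can see this for yourself with a small experiment. This sketch just brute-forces small denominators and reports every case where `(1/n) * n` does not come back as exactly one, for both types:

```csharp
using System;

for (int n = 1; n <= 100; n++)
{
    // Binary floating point: some denominators round-trip to exactly 1.0, others do not.
    double d = (1.0 / n) * n;
    if (d != 1.0)
        Console.WriteLine($"double : (1/{n}) * {n} = {d:G17}");

    // Decimal: also has non-zero error for denominators like 3 that are not powers of ten.
    decimal m = (1.0m / n) * n;
    if (m != 1.0m)
        Console.WriteLine($"decimal: (1/{n}) * {n} = {m}");
}
```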

If your desire is to do exact arithmetic on arbitrary rationals then neither double nor decimal is the appropriate type to use. Use a big-rational library if you need to exactly represent big rationals!

> why is decimal more precise?

Decimal is more precise than double because it has more bits of precision.
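Concretely, a double keeps roughly 15-17 significant decimal digits, while a decimal keeps 28-29. A quick sketch of what happens when a long literal is stored in each type:

```csharp
using System;

double d = 1.2345678901234567890123456789;    // silently rounded to ~15-17 significant digits
decimal m = 1.2345678901234567890123456789m;  // kept to 28-29 significant digits

Console.WriteLine(d.ToString("G17"));  // only the leading digits survive
Console.WriteLine(m);                  // 1.2345678901234567890123456789
```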

But again, precision is not actually that relevant. What is relevant is that decimal has smaller *representation error* than double for many common fractions.

It has smaller representation error than double for representing fractions with a small power of ten in the denominator because it was *designed specifically* to have zero representation error for all fractions with a small power of ten in the denominator.

That's why it is called "decimal", because it represents *fractions with powers of ten*. It represents the *decimal system*, which is the system we commonly use for arithmetic.
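The classic consequence (a minimal sketch) is comparing decimal fractions that binary floating point cannot store exactly:

```csharp
using System;

Console.WriteLine(0.1 + 0.2 == 0.3);     // False: each double operand carries a tiny binary representation error
Console.WriteLine(0.1m + 0.2m == 0.3m);  // True: all three values are exact in decimal
```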

Double, in contrast, was explicitly not designed to have small representation error. **Double was designed to have the range, precision, representation error and performance that is appropriate for physics computations.**

There is no bias towards exact decimal quantities in physics. There is such a bias in finance. **Use decimals for finance. Use doubles for physics.**
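For example, here is a hedged sketch of why that bias matters in finance: accumulate ten-cent amounts a thousand times and the double total drifts away from the exact answer, while the decimal total does not.

```csharp
using System;

double dTotal = 0.0;
decimal mTotal = 0.00m;

for (int i = 0; i < 1000; i++)
{
    dTotal += 0.10;   // ten cents as a double: nearest binary approximation
    mTotal += 0.10m;  // ten cents as a decimal: exact
}

Console.WriteLine(dTotal.ToString("G17"));  // close to, but not exactly, 100
Console.WriteLine(mTotal);                  // exactly 100.00
```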