
Given that `x` is a variable of type `int` with the value `5`, consider the following statement:

`int y = !!x;`

This is what I think happens: `x` is implicitly converted to a `bool`, the first negation is executed, and then the second negation, so one conversion and two negations.

My question is: isn't just casting to `bool` (executing `int y = (bool)x;` instead of `int y = !!x;`) faster than using double negation, since you save the two negations?

I might be wrong, because I see the double negation a lot in the Linux kernel, but I don't understand where my intuition goes wrong. Maybe you can help me out.

There was no `bool` type when Linux was first written. The C language treated everything that was not zero as true in Boolean expressions, so 7, -2, and 0xFF are all "true". There was no `bool` type to cast to. The double-negation trick ensures the result is either zero or whatever bit pattern the compiler writers chose to represent true in Boolean expressions. When you're debugging code and looking at memory and register values, it's easier to recognize true values when they all have the same bit pattern.

Addendum: According to the C89 draft standard, section 3.3.3.3:

> The result of the logical negation operator `!` is 0 if the value of its operand compares unequal to 0, 1 if the value of its operand compares equal to 0. The result has type `int`. The expression `!E` is equivalent to `(0 == E)`.

So while there was no Boolean type in the early days of the Linux OS, the double negation would have yielded either a 0 or a 1 (thanks to Gox for pointing this out), depending on the truthiness of the expression. In other words, any value in the range `INT_MIN..-1` or `1..INT_MAX` would have yielded a 1, and the zero bit pattern is self-explanatory.