Why does 'undefined behaviour' exist? [duplicate]


Certain common programming languages, most notably C and C++, have a strong notion of undefined behaviour: when you perform certain operations outside the way they are intended to be used, the result is undefined behaviour.

If undefined behaviour occurs, a compiler is allowed to do anything it wants (including nothing at all, 'time travelling', etc.).

My question is: Why does this notion of undefined behaviour exist? As far as I can see, a huge number of bugs (programs that work on one version of a compiler and stop working on the next, etc.) would be prevented if, instead of causing undefined behaviour, using operations outside of their intended use caused a compilation error.

Why is this not the way things are?

 


Why does this notion of undefined behaviour exist?

To allow the language / library to be implemented as efficiently as possible on a variety of different computer architectures (and, perhaps in the case of C, while keeping the implementation simple).
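As a concrete illustration (my own sketch, not part of the original answer): signed integer overflow is undefined partly so that an addition can compile to the target's native add instruction with no overflow check, whatever that hardware happens to do on overflow, and so that the optimiser may assume the overflow never occurs.

```cpp
// Because signed overflow is undefined, the compiler may translate
// this to a single native add instruction, with no overflow check.
int add(int a, int b) {
    return a + b;   // undefined behaviour if the mathematical result
                    // does not fit in an int
}

// A compiler is also allowed to fold this check to 'true', since
// x + 1 > x can only be false by way of undefined overflow.
bool always_true(int x) {
    return x + 1 > x;
}
```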

if instead of causing undefined behaviour, using the operations outside of their intended use would cause a compilation error

For most cases of undefined behaviour, it is impossible (or prohibitively expensive) to prove at compile time whether the undefined behaviour will actually occur, for programs in general.
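For instance (a hypothetical sketch of mine): whether the dereference below is undefined depends entirely on what pointer the caller passes at run time, which the compiler generally cannot know, so it cannot reject the program without also rejecting correct uses.

```cpp
#include <cstdio>

// Whether this function exhibits undefined behaviour depends on its
// input: it is perfectly fine for a valid pointer, and undefined for a
// null or dangling pointer. A compiler cannot reject it at compile time
// without rejecting correct programs too.
void print_value(const int* p) {
    std::printf("%d\n", *p);    // UB if p is null or dangling
}

int main() {
    int x = 42;
    print_value(&x);            // well defined
    // print_value(nullptr);    // would be undefined behaviour
}
```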

Some cases can be proven for some programs, but it is not possible to specify exhaustively which cases those are, so the standard does not attempt to do so. Nevertheless, some compilers are smart enough to recognise simple cases of UB, and those compilers will warn the programmer about them.
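For example (my own minimal sketch): mainstream compilers typically diagnose trivially detectable cases like the ones below when warnings are enabled (e.g. with -Wall), even though the standard does not require any diagnostic for them.

```cpp
// Simple cases of undefined behaviour that compilers commonly warn about.
int simple_cases() {
    int a[4] = {1, 2, 3, 4};
    int x;                  // never initialised
    int y = x + 1;          // reads an indeterminate value;
                            // many compilers warn (-Wuninitialized)
    return a[4] + y;        // out-of-bounds access with a constant index;
                            // many compilers warn (-Warray-bounds)
}
```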

A more typical alternative to undefined behaviour would be to define error handling for such cases, such as throwing an exception (compare Java, where accessing a null reference causes an exception of type java.lang.NullPointerException to be thrown). But checking the pre-conditions of well-defined behaviour is slower than not checking them.
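C++ itself offers both flavours in places. A small sketch of the contrast, using std::vector, which has a checked accessor that throws and an unchecked one whose misuse is undefined:

```cpp
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};

    // Checked access: the pre-condition is tested at run time, and a
    // violation becomes a well-defined std::out_of_range exception.
    try {
        std::cout << v.at(10) << '\n';
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }

    // Unchecked access: no run-time test, so an out-of-range index is
    // undefined behaviour -- but the cost of the check is never paid.
    // std::cout << v[10] << '\n';   // UB if uncommented
}
```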

By not checking pre-conditions, the language gives programmers the option of proving correctness themselves, thereby avoiding the runtime overhead of the check in a program that has been shown not to need it. Of course, with this power comes great responsibility.
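For example (again a sketch of mine): here the loop condition itself establishes that the index is always in range, so the unchecked operator[] is well defined and no per-access bounds check has to be paid for.

```cpp
#include <cstddef>
#include <vector>

// The programmer proves the pre-condition structurally: i < v.size()
// holds on every iteration, so the unchecked access is well defined
// without any run-time bounds check.
long long sum(const std::vector<int>& v) {
    long long total = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        total += v[i];
    return total;
}
```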

These days, the burden of proving a program's well-definedness can be somewhat alleviated by using tools (example) which add some of those runtime checks and neatly terminate the program when a check fails.
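One such family of tools (an example of mine, not necessarily the one linked above) is the sanitizers shipped with GCC and Clang: compiling with -fsanitize=undefined instruments the program so that many kinds of UB are reported at run time, and the program can be made to stop instead of silently misbehaving.

```cpp
// Build with, for example:
//   g++ -g -fsanitize=undefined -fno-sanitize-recover=all ub.cpp -o ub
// (clang++ accepts the same flags.) At run time, UBSan then reports the
// signed overflow below and, with -fno-sanitize-recover, terminates the
// program instead of letting the undefined behaviour go unnoticed.
#include <climits>

int main() {
    int x = INT_MAX;
    return x + 1;   // signed integer overflow: undefined behaviour
}
```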
