Are there any non-readability related reasons to NOT specifically use fixed width integers every single time?


Let's say that we have uint_least8_t var, where, hypothetically speaking, var won't possibly ever exceed the value 255. I know that's not how programming works and "possibly" and "ever" are a blasphemy, but, aside from complicating the code and making it less readable, what makes always using fixed width integers a bad idea?


Performance is another reason.

Narrow operands require additional narrowing/widening instructions. This can't always be optimized away without side effects. And sometimes the optimizer just isn't smart enough and plays it safe.

Take the following contrived example.

#include <iostream>
#include <chrono>

using namespace std;
using namespace std::chrono_literals;

int main() {
    auto tm1 = chrono::high_resolution_clock::now();
    unsigned int n = 0;
    unsigned int x = 0;  // though, uint8_t would have been enough!
    for (unsigned int i = 0; i < 1000000000; i++) {
        n += (x * i);
        x = (n + 1) & 0x7F;
    }
    auto tm2 = chrono::high_resolution_clock::now();
    cout << n << ", " << (tm2 - tm1) / 1.0s << " s" << endl;
}

If we change the type of x from unsigned int to uint8_t, the application becomes 15% slower (2 s instead of 1.7 s of run time on x86-64, compiled with GCC 7.2 at -O3).

Assembly with a 32-bit x:

.L2:
  imul eax, edx
  inc edx
  add ebx, eax
  lea eax, [rbx+1]
  and eax, 127
  cmp edx, 1000000000
  jne .L2

Assembly with an 8-bit x:

.L2:
  movzx eax, al    ; owww!
  imul eax, edx
  inc edx
  add ebp, eax
  lea eax, [rbp+1]
  and eax, 127
  cmp edx, 1000000000
  jne .L2

