Why is sizeof( std::variant< char > ) == 8 when using libc++ and not 2 (like with MSVC's STL and libstdc++)?


Consider this example on Compiler explorer.

Basically, we have this code snippet:

#include <cstdint>
#include <variant>

enum class Enum1 : std::uint8_t { A, B };

enum class Enum2 : std::uint8_t { C, D };

using Var = std::variant< Enum1, Enum2 >;
using Var2 = std::variant< char >;

template< std::size_t s >
struct print_size;

void func() {
    print_size< sizeof( Var ) >{};
    print_size< sizeof( Var2 ) >{};
}

If we compile this with GCC's libstdc++ (using either clang or GCC), we get the expected compile error:

error: implicit instantiation of undefined template 'print_size<2>' 

MSVC behaves similarly (also as expected):

error C2027: use of undefined type 'print_size<2>' 

However, when using clang with libc++, I get this error:

error: implicit instantiation of undefined template 'print_size<8>' 

This indicates that sizeof( std::variant< char > ) == 8 when using libc++. I've confirmed this on Linux (see the Compiler Explorer link above), but also with Android's NDK r18 and Xcode 10 (both iOS and macOS).

Is there a reason for libc++'s implementation of std::variant to use so much memory, or is this simply a bug in libc++ that should be reported to the libc++ developers?

 


The reason seems to be that libc++'s original std::variant implementation always uses an unsigned int to store the index of the active alternative, while libstdc++ chooses the smallest unsigned integer type that can hold the largest index.

In current libc++ this optimization is available as well, but it is not enabled by default. The macro that enables it (_LIBCPP_ABI_VARIANT_INDEX_TYPE_OPTIMIZATION) is only defined if _LIBCPP_ABI_VERSION >= 2 or _LIBCPP_ABI_UNSTABLE is set.
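Selecting the smallest index type is a simple compile-time computation. Here is a hedged sketch of the idea (the alias name is invented, and it ignores the extra sentinel value real implementations reserve for the valueless-by-exception state):

```cpp
#include <cstddef>
#include <cstdint>
#include <type_traits>

// Pick the narrowest unsigned type able to index NumAlternatives
// alternatives. Simplified: a real implementation also has to represent
// variant_npos / the valueless state.
template <std::size_t NumAlternatives>
using smallest_index_t = std::conditional_t<
    (NumAlternatives < 256), std::uint8_t,
    std::conditional_t<(NumAlternatives < 65536), std::uint16_t,
                       std::uint32_t>>;

static_assert(std::is_same_v<smallest_index_t<2>, std::uint8_t>);
static_assert(std::is_same_v<smallest_index_t<1000>, std::uint16_t>);
static_assert(std::is_same_v<smallest_index_t<70000>, std::uint32_t>);
```

With a one-byte index, std::variant<char> needs only 2 bytes, which is what libstdc++ and MSVC report.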

I suppose that, because the original implementation lacked this optimization and the resulting change in data layout breaks std::variant compatibility in both directions, the optimization was left disabled by default to preserve binary compatibility with older versions. The newer ABI can be opted into by setting the ABI version macro mentioned above, but of course every library in the program must then be compiled with the same ABI version.

See https://reviews.llvm.org/D40210
