Some sources on the Internet (specifically this one) say that std::function uses a small-closure optimization, i.e. it does not allocate on the heap if the closure is smaller than some threshold (the link above indicates 16 bytes for gcc).
So I went digging through the g++ headers.
It looks like whether or not this optimization is applied is decided by this block of code in the "functional" header (g++ 4.6.3):
static void
_M_init_functor(_Any_data& __functor, _Functor&& __f)
{ _M_init_functor(__functor, std::move(__f), _Local_storage()); }
and some lines down:
static void
_M_init_functor(_Any_data& __functor, _Functor&& __f, true_type)
{ new (__functor._M_access()) _Functor(std::move(__f)); }
static void
_M_init_functor(_Any_data& __functor, _Functor&& __f, false_type)
{ __functor._M_access<_Functor*>() = new _Functor(std::move(__f)); }
};
i.e. if _Local_storage() is true_type, then placement new is called; otherwise, regular new.
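To make the mechanism easier to follow, here is a minimal standalone sketch of the same tag-dispatch idea; the names (any_data, init_functor) and the 16-byte buffer are my own simplification, not the actual libstdc++ code, and the real implementation additionally checks alignment and the location-invariance trait shown below:

#include <new>
#include <type_traits>
#include <utility>

// Simplified stand-in for _Any_data: a small internal buffer.
union any_data
{
    void* ptr;
    char  buffer[16];
};

template <typename F>
void init_functor(any_data& storage, F f, std::true_type)   // small enough: stored locally
{
    ::new (storage.buffer) F(std::move(f));                  // placement new into the buffer
}

template <typename F>
void init_functor(any_data& storage, F f, std::false_type)  // too big: stored on the heap
{
    storage.ptr = new F(std::move(f));                       // ordinary new; only the pointer is kept
}

template <typename F>
void init_functor(any_data& storage, F f)
{
    // Plays the role of _Local_storage(): a compile-time tag selecting the overload.
    typedef std::integral_constant<bool, (sizeof(F) <= sizeof(any_data))> local_storage;
    init_functor(storage, std::move(f), local_storage());
}

int main()
{
    any_data storage;
    int x = 42;
    auto small_closure = [x]{ return x; };
    init_functor(storage, small_closure);   // fits into the buffer, so the true_type overload runs
}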
The definition of _Local_storage is the following:
typedef integral_constant<bool, __stored_locally> _Local_storage;
and __stored_locally:
static const std::size_t _M_max_size = sizeof(_Nocopy_types);
static const std::size_t _M_max_align = __alignof__(_Nocopy_types);
static const bool __stored_locally =
(__is_location_invariant<_Functor>::value
&& sizeof(_Functor) <= _M_max_size
&& __alignof__(_Functor) <= _M_max_align
&& (_M_max_align % __alignof__(_Functor) == 0));
and finally, __is_location_invariant:
template<typename _Tp>
struct __is_location_invariant
: integral_constant<bool, (is_pointer<_Tp>::value
|| is_member_pointer<_Tp>::value)>
{ };
So, as far as I can tell, a closure type is neither a pointer nor a member pointer. To verify that, I even wrote a small test program:
#include <functional>
#include <iostream>

int main(int argc, char* argv[])
{
    std::cout << "max stored locally size: " << sizeof(std::_Nocopy_types)
              << ", align: " << __alignof__(std::_Nocopy_types) << std::endl;
    auto lambda = [](){};
    typedef decltype(lambda) lambda_t;
    std::cout << "lambda size: " << sizeof(lambda_t) << std::endl;
    std::cout << "lambda align: " << __alignof__(lambda_t) << std::endl;
    std::cout << "stored locally: "
              << ((std::__is_location_invariant<lambda_t>::value
                   && sizeof(lambda_t) <= std::_Function_base::_M_max_size
                   && __alignof__(lambda_t) <= std::_Function_base::_M_max_align
                   && (std::_Function_base::_M_max_align % __alignof__(lambda_t) == 0))
                  ? "true" : "false") << std::endl;
}
and the output is:
max stored locally size: 16, align: 8
lambda size: 1
lambda align: 1
stored locally: false
So, my question is the following: does initializing std::function with a lambda always result in a heap allocation? Or am I missing something?
As of GCC 4.8.1, the std::function in libstdc++ optimizes only for pointers to functions and methods. So regardless of the size of your functor (lambdas included), initializing a std::function from it triggers a heap allocation. Unfortunately, there is no support for custom allocators either.
Visual C++ 2012 and LLVM libc++ do avoid allocation for any sufficiently small functor.
Note that for this optimization to kick in, your functor must satisfy std::is_nothrow_move_constructible. This is needed to support a noexcept std::function::swap(). Fortunately, lambdas satisfy this requirement if all captured values do.
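That requirement can be verified for a particular closure with the trait itself; a minimal sketch of my own (the lambda and its capture are placeholders):

#include <type_traits>

int main()
{
    int x = 0;
    auto by_value = [x]{ return x; };
    // The closure is nothrow-move-constructible as long as everything it
    // captures by value is, which is the precondition described above.
    static_assert(std::is_nothrow_move_constructible<decltype(by_value)>::value,
                  "capturing an int by value keeps the closure nothrow movable");
}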
You can write a simple program to check behavior on various compilers:
#include <functional>
#include <iostream>

// noexcept is missing in MSVC11
#ifdef _MSC_VER
#  define NOEXCEPT
#else
#  define NOEXCEPT noexcept
#endif

// Functor size is meant to be set from the compiler command line,
// e.g. -DFUNCTOR_SIZE=16; fall back to 16 if it is not.
#ifndef FUNCTOR_SIZE
#  define FUNCTOR_SIZE 16
#endif

struct A
{
    A() { }
    A(const A&) { }
    A(A&& other) NOEXCEPT { std::cout << "A(A&&)\n"; }
    void operator()() const { std::cout << "A()\n"; }
    char data[FUNCTOR_SIZE];
};

int main()
{
    std::function<void ()> f((A()));
    f();
    // prints "A(A&&)" if the small functor optimization is employed
    auto f2 = std::move(f);
    return 0;
}
I bet if you added this:
std::cout << "std::__is_location_invariant: " << std::__is_location_invariant<lambda_t>::value << std::endl;
you would get back:
std::__is_location_invariant: 0
At least that's what ideone says.
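The underlying reason is that a closure type is an ordinary class type, so neither branch of the trait can hold, no matter how small the closure is. A quick sketch of my own (reusing the lambda_t typedef from the question) to confirm:

#include <type_traits>

int main()
{
    auto lambda = [](){};
    typedef decltype(lambda) lambda_t;
    // Both checks that make up __is_location_invariant are false for a closure.
    static_assert(!std::is_pointer<lambda_t>::value, "a closure is not a pointer");
    static_assert(!std::is_member_pointer<lambda_t>::value, "a closure is not a member pointer");
}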