Short question.
I just got a DLL I'm supposed to interface with. The DLL uses the CRT from msvcr90D.dll (note the D), and returns std::strings, std::lists, and boost::shared_ptrs. Operator new/delete is not overloaded anywhere.
I assume a CRT mixup (msvcr90.dll in a release build, or one of the components being rebuilt against a newer CRT, etc.) is bound to cause problems eventually, and that the DLL should be rewritten to avoid returning anything that could possibly call new/delete (i.e. anything that could cause delete to be called in my code on a block of memory that was allocated, possibly with a different CRT, inside the DLL).
Am I right or not?
The main thing to keep in mind is that dlls contain code and not memory. Allocated memory belongs to the process(1). When you instantiate an object in your process, you invoke the constructor code. During that object's lifetime you will invoke other pieces of code (methods) to work on that object's memory. Then, when the object is going away, the destructor code is invoked.
STL templates are not explicitly exported from the dll; the template code is statically linked into each dll. So when a std::string s is created in a.dll and passed to b.dll, there are two different instances of the string::copy method, one in each dll. copy called in a.dll invokes a.dll's copy method... if we are working with s in b.dll and call copy, the copy method in b.dll will be invoked.
This is why in Simon's answer he says:
Bad things will happen unless you can always guarantee that your entire set of binaries is all built with the same toolchain.
because if for some reason, string s's copy differs between a.dll and b.dll, weird things will happen. Even worse if string itself is different between a.dll and b.dll, and the destructor in one knows to clean extra memory that the other ignores... you can have difficult to track down memory leaks. Maybe even worse... a.dll might have been built against a completely different version of the STL (ie STLPort) while b.dll is built using Microsoft's STL implementation.
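As a rough illustration of the failure mode (the function and names below are made up for this sketch, not taken from the question): a string whose buffer was allocated by one module's CRT ends up being destroyed by another module's string code and freed on another module's heap.
#include <string>

// Hypothetical export from a.dll: the string's buffer is allocated by
// a.dll's statically linked CRT/STL code.
__declspec(dllexport) std::string GetGreeting()
{
    return std::string("hello from a.dll - long enough to defeat the small-string buffer");
}

// Caller living in b.exe, built with a different CRT (or a debug build
// against a release DLL, or a different STL implementation):
void UseGreeting()
{
    std::string s = GetGreeting();
    // The string returned by a.dll (whether it is s itself or the elided
    // temporary) is destroyed here by b.exe's ~basic_string and freed by
    // b.exe's CRT, even though a.dll's CRT allocated its buffer. With
    // mismatched CRTs or string layouts this corrupts the heap or leaks.
}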
So what should you do? Where we work, we have strict control over the toolchain and build settings for each dll. So when we develop internal dlls, we freely transfer STL templates around. We still have problems that crop up on rare occasion because someone didn't correctly set up their project. However, we find the convenience of the STL worth the occasional problem that crops up.
For exposing dlls to 3rd parties, that's another story entirely. Unless you want to strictly require specific build settings from clients, you'll want to avoid exporting STL templates. I don't recommend strictly requiring your clients to use specific build settings... they may have another 3rd party tool that expects you to use completely opposite build settings.
(1) Yes, I know statics and locals are instantiated/deleted on dll load/unload.
I have this exact problem in a project I'm working on - STL classes are transmitted to and from DLLs a lot. The problem isn't just the different memory heaps - it's actually that the STL classes have no binary standard (ABI). For example, in debug builds, some STL implementations add extra debugging information to the STL classes, such that sizeof(std::vector<T>) (release build) != sizeof(std::vector<T>) (debug build). Ouch! There's no hope you can rely on binary compatibility of these classes. Besides, if your DLL was compiled with a different compiler using some other STL implementation with different algorithms, you might get a different binary format in release builds, too.
The way I've solved this problem is by using a template class called pod<T> (POD stands for Plain Old Data, like chars and ints, which usually transfer fine between DLLs). The job of this class is to package its template parameter into a consistent binary format, and then unpackage it at the other end. For example, instead of a function in a DLL returning a std::vector<int>, you return a pod<std::vector<int>>. There's a template specialization for pod<std::vector<T>>, which mallocs a memory buffer and copies the elements. It also provides operator std::vector<T>(), so that the return value can transparently be stored back into a std::vector, by constructing a new vector, copying its stored elements into it, and returning it. Because it always uses the same binary format, it can be safely compiled into separate binaries and remain binary compatible. An alternative name for pod could be make_binary_compatible.
Here's the pod class definition:
// All members are protected, because the class *must* be specialized
// for each type
template<typename T>
class pod {
protected:
    pod();
    pod(const T& value);
    pod(const pod& copy); // no copy ctor in any pod
    pod& operator=(const pod& assign);

    T get() const;
    operator T() const;

    ~pod();
};
Here's the partial specialization for pod<vector<T>> - note, partial specialization is used so this class works for any type of T. Also note, it actually is storing a memory buffer of pod<T> rather than just T - if the vector contained another STL type like std::string, we'd want that to be binary compatible too!
// Headers needed by this specialization
#include <vector>
#include <new>        // placement new, std::bad_alloc
#include <algorithm>  // std::copy
#include <iterator>   // std::back_inserter

// Transmit vector as POD buffer
template<typename T>
class pod<std::vector<T> > {
protected:
    pod(const pod<std::vector<T> >& copy); // no copy ctor

    // For storing vector as plain old data buffer
    typename std::vector<T>::size_type size;
    pod<T>* elements;

    void release()
    {
        if (elements) {
            // Destruct every element, in case they contained other pod<T>s
            pod<T>* ptr = elements;
            pod<T>* end = elements + size;
            for ( ; ptr != end; ++ptr)
                ptr->~pod<T>();
            // Deallocate memory
            pod_free(elements);
            elements = NULL;
        }
    }

    void set_from(const std::vector<T>& value)
    {
        // Allocate buffer with room for pods of T
        size = value.size();
        if (size > 0) {
            elements = reinterpret_cast<pod<T>*>(pod_malloc(sizeof(pod<T>) * size));
            if (elements == NULL)
                throw std::bad_alloc(); // standard bad_alloc has no string constructor
        }
        else
            elements = NULL;

        // Placement new pods in to the buffer
        pod<T>* ptr = elements;
        pod<T>* end = elements + size;
        typename std::vector<T>::const_iterator iter = value.begin();
        for ( ; ptr != end; )
            new (ptr++) pod<T>(*iter++);
    }

public:
    pod() : size(0), elements(NULL) {}

    // Construct from vector<T>
    pod(const std::vector<T>& value)
    {
        set_from(value);
    }

    pod<std::vector<T> >& operator=(const std::vector<T>& value)
    {
        release();
        set_from(value);
        return *this;
    }

    std::vector<T> get() const
    {
        std::vector<T> result;
        result.reserve(size);
        // Copy out the pods, using their operator T() to convert back to T
        std::copy(elements, elements + size, std::back_inserter(result));
        return result;
    }

    operator std::vector<T>() const
    {
        return get();
    }

    ~pod()
    {
        release();
    }
};
Note the memory allocation functions used are pod_malloc and pod_free - these are simply malloc and free, but using the same function between all DLLs. In my case, all DLLs use the malloc and free from the host EXE, so they are all using the same heap, which solves the heap memory issue. (Exactly how you figure this out is down to you.)
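One possible way to arrange that (a sketch only; the answer deliberately leaves the mechanism up to you, and the function-pointer handshake below is an assumption, not the author's code) is to have the host EXE hand its allocator to each DLL right after loading it:
#include <cstddef>

// In each DLL: store the allocator the EXE hands over, and route
// pod_malloc/pod_free through it so every pod buffer comes from one heap.
typedef void* (*pod_alloc_fn)(std::size_t);
typedef void  (*pod_free_fn)(void*);

static pod_alloc_fn g_pod_malloc = 0;
static pod_free_fn  g_pod_free = 0;

// Exported from the DLL; the EXE calls it after LoadLibrary, passing
// pointers to its own malloc and free.
extern "C" __declspec(dllexport)
void set_pod_allocator(pod_alloc_fn alloc_fn, pod_free_fn free_fn)
{
    g_pod_malloc = alloc_fn;
    g_pod_free = free_fn;
}

void* pod_malloc(std::size_t bytes) { return g_pod_malloc(bytes); }
void  pod_free(void* ptr)           { g_pod_free(ptr); }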
Also note you need specializations for pod<T*>, pod<const T*>, and pod for all the basic types (pod<int>, pod<short> etc), so that they can be stored in a "pod vector" and other pod containers. These should be straightforward enough to write if you understand the above example.
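For instance, a basic-type specialization can simply store the value, since an int already has a fixed binary layout. This is a sketch written against the interface shown above, not code from the original answer:
// int is already binary compatible between the binaries involved,
// so pod<int> just holds the value and converts in both directions.
template<>
class pod<int> {
    int value;
public:
    pod() : value(0) {}
    pod(int v) : value(v) {}
    pod<int>& operator=(int v) { value = v; return *this; }
    int get() const { return value; }
    operator int() const { return value; }
};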
This method does mean copying the entire object. You can, however, pass references to pod types, since there is an operator= which is safe between binaries. There's no real pass-by-reference, though, since the only way to change a pod type is to copy it out back to its original type, change it, then repackage as a pod. Also, the copies it creates mean it's not necessarily the fastest way, but it works.
However, you can also pod-specialize your own types, which means you can effectively return complex types like std::map<MyClass, std::vector<std::string>> providing there's a specialization for pod<MyClass> and partial specializations for std::map<K, V>, std::vector<T> and std::basic_string<T> (which you only need to write once).
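As an illustration of pod-specializing one of your own types (MyClass and its members are invented for this sketch, and it assumes a pod<std::basic_string<char> > specialization written along the same lines as the vector one above):
#include <string>

// Hypothetical user type...
struct MyClass {
    int id;
    std::string name;
};

// ...and its pod specialization: each member is stored as a pod itself,
// so the whole object keeps a fixed binary layout across binaries.
template<>
class pod<MyClass> {
    pod<int> id;
    pod<std::string> name;   // relies on the basic_string specialization
public:
    pod() {}
    pod(const MyClass& value) : id(value.id), name(value.name) {}
    pod<MyClass>& operator=(const MyClass& value)
    {
        id = value.id;
        name = value.name;
        return *this;
    }
    MyClass get() const
    {
        MyClass result;
        result.id = id.get();
        result.name = name.get();
        return result;
    }
    operator MyClass() const { return get(); }
};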
The end result usage looks like this. A common interface is defined:
class ICommonInterface {
public:
    virtual pod<std::vector<std::string>> GetListOfStrings() const = 0;
};
A DLL might implement it as such:
pod<std::vector<std::string>> MyDllImplementation::GetListOfStrings() const
{
    std::vector<std::string> ret;
    // ...
    // pod can construct itself from its template parameter,
    // so this works without any mention of pod
    return ret;
}
And the caller, a separate binary, can call it as such:
ICommonInterface* pCommonInterface = ...
// pod has an operator T(), so this works again without any mention of pod
std::vector<std::string> list_of_strings = pCommonInterface->GetListOfStrings();
So once it's set up, you can use it almost as if the pod class wasn't there.
I'm not sure about "anything that could call new/delete" - this can be managed by careful use of shared pointer equivalents with appropriate allocators/deleter functions.
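For example (a sketch of the general idea, with invented names, not code from the question), a boost::shared_ptr can capture a deleter from the module that allocated the object, so the matching CRT does the freeing no matter which binary drops the last reference:
#include <boost/shared_ptr.hpp>

class Widget { /* ... */ };

// Compiled into the DLL, so it uses the DLL's delete / CRT heap.
static void destroy_widget(Widget* p) { delete p; }

// Exported factory: the shared_ptr remembers destroy_widget, so the
// object is always destroyed by the module that allocated it.
__declspec(dllexport) boost::shared_ptr<Widget> create_widget()
{
    return boost::shared_ptr<Widget>(new Widget, &destroy_widget);
}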
However, in general I wouldn't pass templates across DLL boundaries - the implementation of the template class ends up on both sides of the interface, which means the two sides can be using different implementations. Bad things will happen unless you can always guarantee that your entire set of binaries is all built with the same toolchain.
When I need this sort of functionality I often use a virtual interface class across the boundary. You can then provide wrappers for std::string, list etc. that allow you to safely use them via the interface. You can then control allocation etc. using your implementation, or using a shared_ptr.
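A minimal sketch of that approach (the IString name and its methods are assumptions, not the author's code): only an abstract interface and plain types cross the boundary, and the module that created the object is the one that deletes it:
#include <cstddef>
#include <string>

// Only this abstract interface is visible across the DLL boundary;
// the std::string stays private to the module that created it.
class IString {
public:
    virtual const char* c_str() const = 0;
    virtual std::size_t length() const = 0;
    virtual void assign(const char* text) = 0;
    virtual void destroy() = 0;   // deletion happens in the creating module
protected:
    virtual ~IString() {}
};

// Implementation compiled into one module only.
class StringImpl : public IString {
    std::string s;
public:
    virtual const char* c_str() const { return s.c_str(); }
    virtual std::size_t length() const { return s.length(); }
    virtual void assign(const char* text) { s = text; }
    virtual void destroy() { delete this; }
};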
Having said all this, the one thing I do use in my DLL interfaces is shared_ptr, as it's too useful not to. I haven't yet had any problems, but everything is built with the same toolchain. I'm waiting for this to bite me, as no doubt it will. See this previous question: Using shared_ptr in dll-interfaces