I'm using boost::interprocess::vector to share some strings between processes, and I want to make sure I do not overflow the shared memory segment it lives in.
How do I find out how much space the vector currently takes up in the segment, and how much memory a given segment-allocated string will take?
typedef boost::interprocess::managed_shared_memory::segment_manager SegmentManager;
typedef boost::interprocess::allocator<char, SegmentManager> CharAllocator;
typedef boost::interprocess::basic_string<char, std::char_traits<char>, CharAllocator> ShmString;
typedef boost::interprocess::allocator<ShmString, SegmentManager> StringAllocator;
typedef boost::interprocess::vector<ShmString, StringAllocator> ShmStringVector;
const size_t SEGMENT_SIZE = ...;
void addToSharedVector(std::string localString) {
    using namespace boost::interprocess;
    managed_shared_memory segment(open_only, kSharedMemorySegmentName);
    ShmStringVector *shmvector = segment.find<ShmStringVector>(kSharedMemoryVectorName).first;

    size_t currentVectorSizeInShm = ?????(shmvector); <-------- HALP!
    size_t sizeOfNewStringInSharedMemory = ?????(localString); <--------

    //shared mutex not shown for clarity
    if (currentVectorSizeInShm + sizeOfNewStringInSharedMemory < SEGMENT_SIZE) {
        CharAllocator charAllocator(segment.get_segment_manager());
        ShmString shmString(charAllocator);
        shmString = localString.c_str();
        shmvector->push_back(shmString);
    }
}
Quick and dirty
You can back the shared memory with a physically mapped file and see how many pages have actually been committed to disk. This gives a rough indication on many implementations, since pages are typically committed one at a time and the usual memory page size is 4 KiB.
I have another answer [1] that shows the basics of this method.
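Here is a minimal sketch of that approach, assuming a POSIX system (it uses ::stat and the st_blocks field) and an illustrative probe file name; the container typedefs mirror the question's but are redeclared for the mapped-file segment manager:

// Sketch: back the container with a managed_mapped_file, then check how many
// disk blocks the (sparse) backing file has actually had allocated.
#include <boost/interprocess/managed_mapped_file.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/containers/string.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <sys/stat.h>
#include <iostream>
#include <string>

namespace bip = boost::interprocess;

typedef bip::managed_mapped_file::segment_manager MappedSegmentManager;
typedef bip::allocator<char, MappedSegmentManager> MappedCharAllocator;
typedef bip::basic_string<char, std::char_traits<char>, MappedCharAllocator> MappedString;
typedef bip::allocator<MappedString, MappedSegmentManager> MappedStringAllocator;
typedef bip::vector<MappedString, MappedStringAllocator> MappedStringVector;

int main() {
    const char* path = "/tmp/shmvector_probe.bin"; // hypothetical probe file
    bip::file_mapping::remove(path);

    {
        bip::managed_mapped_file segment(bip::open_or_create, path, 10u * 1024 * 1024);
        MappedStringVector* v =
            segment.find_or_construct<MappedStringVector>("vec")(segment.get_segment_manager());

        MappedCharAllocator charAlloc(segment.get_segment_manager());
        for (int i = 0; i < 1000; ++i)
            v->push_back(MappedString("some payload string for the test", charAlloc));

        segment.flush(); // push dirty pages to the file so its block count reflects usage
    }

    struct stat st {};
    if (::stat(path, &st) == 0) {
        // st_size is the sparse file's nominal size; st_blocks * 512 is what has
        // actually been allocated on disk -- a rough proxy for committed pages.
        std::cout << "nominal size:   " << st.st_size << " bytes\n"
                  << "allocated size: " << st.st_blocks * 512 << " bytes\n";
    }
}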
You can call get_free_memory() on the segment manager. Note that this doesn't tell you what's allocated /just/ for that vector, but it gives you an (arguably more useful) idea of how much space is actually occupied.
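Here is a minimal sketch of that, assuming the typedefs and the segment/vector names from the question are in scope; the helper name is mine. Diffing the free-memory figure around the insertion reports what the element really cost:

// Sketch: measure the real cost of one insertion by diffing the segment's
// free memory. Assumes the question's typedefs and names already exist.
size_t bytesConsumedByInsert(boost::interprocess::managed_shared_memory& segment,
                             ShmStringVector* shmvector,
                             const std::string& localString) {
    CharAllocator charAllocator(segment.get_segment_manager());

    size_t before = segment.get_segment_manager()->get_free_memory();
    shmvector->push_back(ShmString(localString.c_str(), charAllocator));
    size_t after = segment.get_segment_manager()->get_free_memory();

    // The difference covers the stored string, allocator bookkeeping, and any
    // reallocation of the vector's element array triggered by this push_back.
    return before - after;
}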
In another answer [2] I have used that to benchmark differences in memory overhead between data containers with contiguous storage vs. node-based containers.
As you can see, individual allocations have high overhead, and reallocation leads to fragmentation really quickly. So it's worth looking at that comparison before settling on a container layout.
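Putting this back into the question's function: below is a hedged sketch of the guard based on the segment's free memory instead of per-object size arithmetic. kSafetyMargin is an arbitrary cushion of my own (not from the question), and because a vector reallocation can need more than any fixed margin, the push_back is also wrapped in a catch for boost::interprocess::bad_alloc:

// Sketch: guard on free memory rather than computing exact object sizes.
// kSafetyMargin is a hypothetical cushion for bookkeeping and vector growth.
void addToSharedVector(const std::string& localString) {
    using namespace boost::interprocess;
    managed_shared_memory segment(open_only, kSharedMemorySegmentName);
    ShmStringVector *shmvector = segment.find<ShmStringVector>(kSharedMemoryVectorName).first;

    const size_t kSafetyMargin = 64 * 1024; // arbitrary; tune for your workload

    //shared mutex not shown for clarity
    if (segment.get_free_memory() > localString.size() + kSafetyMargin) {
        CharAllocator charAllocator(segment.get_segment_manager());
        try {
            shmvector->push_back(ShmString(localString.c_str(), charAllocator));
        } catch (const boost::interprocess::bad_alloc&) {
            // The segment ran out anyway (e.g. the vector had to grow);
            // handle as appropriate: log, drop the item, or grow the segment.
        }
    }
}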
[1] see "Memory Mapped Files, Managed Mapped File and Offset Pointer"
[2] see "Bad alloc is thrown"