A memory pool is a logical division of main memory or storage that is reserved for processing a job or group of jobs. On your system, all main storage can be divided into logical allocations called memory pools. By default, the system manages the transfer of data and programs into memory pools.
While memory pools can be used as data buffers within a thread, CMSIS-RTOS also implements a mail queue, which is a combination of a memory pool and a message queue: the mail queue uses a memory pool to create formatted memory blocks and passes pointers to these blocks through a message queue.
Any kind of "pool" is really just resources you've acquired/initialized in advance so that they're already ready to go, not allocated on the fly with each client request. When clients finish using them, the resource returns to the pool instead of being destroyed.
Memory pools are basically just memory you've allocated in advance (and typically in big blocks). For example, you might allocate 4 kilobytes of memory in advance. When a client requests 64 bytes of memory, you just hand them a pointer to an unused space in that memory pool for them to read and write whatever they want. When the client is done, you can just mark that section of memory as being unused again.
As a basic example that doesn't bother with alignment, safety, or returning unused (freed) memory to the pool:
#include <cassert>
#include <cstddef>
#include <new>     // for placement new, used below

class MemoryPool
{
public:
    MemoryPool(): ptr(mem)
    {
    }

    void* allocate(std::size_t size)
    {
        assert((ptr + size) <= (mem + sizeof mem) && "Pool exhausted!");
        // Bump-pointer allocation: hand out the next unused bytes.
        void* result = ptr;
        ptr += size;
        return result;
    }

private:
    // Noncopyable: a copy's `ptr` would point into the original's buffer.
    MemoryPool(const MemoryPool&);
    MemoryPool& operator=(const MemoryPool&);

    char mem[4096];
    char* ptr;
};
...
{
    MemoryPool pool;

    // Allocate an instance of `Foo` into a chunk returned by the memory pool.
    Foo* foo = new(pool.allocate(sizeof(Foo))) Foo;
    ...

    // Invoke the dtor manually since we used placement new.
    foo->~Foo();
}
This is effectively just pooling memory from the stack. A more advanced implementation would chain blocks together and check whether the current block is full to avoid running out of memory, deal with fixed-size chunks that are unions (list nodes when free, memory for the client when used), and it definitely needs to deal with alignment (the easiest way is to max-align the memory blocks and pad each chunk to align the one that follows it).
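As a sketch of that fixed-size, free-list variant (the class name and the 64-byte chunk size are illustrative choices, not from the original answer):

#include <cassert>
#include <cstddef>

class FixedPool
{
public:
    FixedPool()
    {
        // Thread every chunk onto the free list.
        for (std::size_t i = 0; i < num_chunks - 1; ++i)
            chunks[i].next = &chunks[i + 1];
        chunks[num_chunks - 1].next = 0;
        free_list = &chunks[0];
    }

    void* allocate()
    {
        assert(free_list && "Pool exhausted!");
        Chunk* chunk = free_list;
        free_list = chunk->next;   // pop the head of the free list
        return chunk->data;
    }

    void deallocate(void* ptr)
    {
        Chunk* chunk = static_cast<Chunk*>(ptr);
        chunk->next = free_list;   // push the chunk back onto the list
        free_list = chunk;
    }

private:
    // A chunk is a list node while free and client storage while used.
    union Chunk
    {
        Chunk* next;
        char data[64];
    };

    static const std::size_t num_chunks = 4096 / sizeof(Chunk);
    Chunk chunks[num_chunks];
    Chunk* free_list;
};

Because every chunk is the same size, freeing is O(1), and the union means the free list costs no memory beyond the chunks themselves.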
Fancier still are buddy allocators, slabs, allocators applying fitting algorithms, etc. Implementing an allocator is not so different from implementing a data structure, but you get knee-deep in raw bits and bytes, have to think about things like alignment, and can't shuffle the contents around (you can't invalidate existing pointers to memory in use). As with data structures, there isn't really a golden standard that says, "thou shalt do this". There's a wide variety of them, each with its own strengths and weaknesses, but there are some especially popular algorithms for memory allocation.
Implementing allocators is something I would actually recommend to many C and C++ developers, just to get more in tune with how memory management works. It can make you more conscious of how the memory being requested connects to the data structures using it, and it opens up a whole new door of optimization opportunities without requiring any new data structures. It can also make data structures like linked lists, which are normally not very efficient, much more useful, and it reduces the temptation to make opaque/abstract types less opaque just to avoid heap overhead. However, there can be an initial excitement that makes you want to shoehorn custom allocators into everything, only to regret the additional burden later (especially if, in your excitement, you forget about issues like thread safety and alignment). It's worth taking it easy there. As with any micro-optimization, it's generally best applied discretely, in hindsight, and with a profiler in hand.
The basic concept of a memory pool is to allocate a large portion of memory for your application and, later on, instead of using plain new to request memory from the O/S, return a chunk of the previously allocated memory instead.
In order to make this work, you need to manage memory usage yourself and cannot rely on the O/S; i.e., you'll need to implement your own versions of new and delete, and use the original versions only when allocating, freeing, or potentially resizing your own memory pool.
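As a minimal sketch of what that replacement can look like, here routed through a toy bump allocator for a single class (all names are hypothetical, and alignment, thread safety, and exhaustion checks are ignored):

#include <cstddef>

// A toy pool: hands out slices of a static buffer and never reclaims.
static char pool_buffer[4096];
static std::size_t pool_offset = 0;

struct Foo
{
    // Route this class's allocations through the pool instead of the O/S.
    static void* operator new(std::size_t size)
    {
        void* p = pool_buffer + pool_offset;
        pool_offset += size;
        return p;
    }
    static void operator delete(void*) {}  // nothing goes back to the O/S

    int value;
};

int main()
{
    Foo* f = new Foo;   // calls Foo::operator new, i.e., the pool
    delete f;           // calls Foo::operator delete
    return 0;
}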
The first approach would be to define a class that encapsulates a memory pool and provides custom methods implementing the semantics of new and delete, but taking memory from the pre-allocated pool. Remember, this pool is nothing more than an area of memory that was allocated using new and has an arbitrary size. The pool's versions of new and delete return and take pointers, respectively. The simplest version would probably look like C code:
void *MyPool::malloc(const size_t &size)
void MyPool::free(void *ptr)
You can pepper this with templates to add type conversion automatically, e.g.:

template <typename T>
T *MyPool::malloc();

template <typename T>
void MyPool::free(T *ptr);

Notice that, thanks to the template argument, the size_t size argument can be omitted, since the compiler allows you to call sizeof(T) inside malloc().
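One way the class could be fleshed out, where the bump-pointer bodies are an assumption for illustration (the text above only gives the declarations, and construction/destruction via placement new is omitted):

#include <cstddef>

class MyPool
{
public:
    MyPool() : offset(0) {}

    void *malloc(const std::size_t &size)
    {
        void *result = buffer + offset;
        offset += size;   // no exhaustion or alignment handling here
        return result;
    }

    void free(void * /*ptr*/)
    {
        // A real pool would track freed blocks, e.g. on a free list;
        // this bump-pointer sketch reclaims nothing until it dies.
    }

    // Templated wrappers: the compiler supplies sizeof(T).
    template <typename T>
    T *malloc() { return static_cast<T*>(malloc(sizeof(T))); }

    template <typename T>
    void free(T *ptr) { free(static_cast<void*>(ptr)); }

private:
    char buffer[4096];
    std::size_t offset;
};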
Returning a simple pointer means that your pool can only grow when there's adjacent memory available, and only shrink if the pool memory at its "borders" is not taken. More specifically, you cannot relocate the pool because that would invalidate all pointers your malloc function returned.
A way to fix this limitation is to return pointers to pointers, i.e., return T** instead of simply T*. That allows you to change the underlying pointer while the user-facing part remains the same. Incidentally, that was done for the NeXT O/S, where it was called a "handle". To access the handle's contents, one had to call (*handle)->method() or (**handle).method(). Eventually, Maf Vosburg invented a pseudo-operator that exploited operator precedence to get rid of the (*handle)->method() syntax: handle[0]->method(); it was called the sprong operator.
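A hypothetical illustration of the handle idea (Widget, its method, and the manual relocation are invented for this sketch):

struct Widget { void method() {} };

int main()
{
    // The pool owns the real pointer; the client only keeps a handle.
    Widget *storage = new Widget;
    Widget **handle = &storage;   // a handle: pointer to pointer

    (*handle)->method();          // classic syntax
    (**handle).method();          // equivalent
    handle[0]->method();          // the "sprong" operator

    // The pool can relocate the block without breaking the client,
    // because the client goes through the handle, not the raw pointer.
    Widget *moved = new Widget(**handle);   // copy to the new location
    delete *handle;
    *handle = moved;
    (*handle)->method();          // still valid after the move

    delete *handle;
    return 0;
}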
The benefits of using a memory pool are: first, you avoid the overhead of a typical call to new and delete; second, your memory pool ensures that a contiguous segment of memory is used by your application, i.e., it reduces memory fragmentation and can therefore increase CPU cache hits.
So, basically, a memory pool buys you a speedup at the cost of potentially more complex application code. But then again, there are memory pool implementations that are proven and can simply be used, such as boost::pool.
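For instance, boost::pool can be used roughly like this (a brief sketch; see the Boost.Pool documentation for the full interface):

#include <boost/pool/pool.hpp>

int main()
{
    boost::pool<> p(sizeof(int));   // a pool of fixed-size chunks

    int *i = static_cast<int*>(p.malloc());
    *i = 42;
    p.free(i);                      // back to the pool, not the O/S

    return 0;
}   // the pool releases all its memory when it goes out of scope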
Basically, memory pools allow you to avoid some of the expense of allocating memory in a program that allocates and frees memory frequently. What you do is allocate a big chunk of memory at the beginning of execution, and reuse the same memory for different allocations that do not overlap temporally. You have to have some mechanism for keeping track of what memory is available and use that memory for allocations. When you're done with the memory, instead of freeing it, mark it as available again.
In other words, instead of calls to new/malloc and delete/free, make calls to your self-defined allocator and deallocator functions.
Doing this allows you to do only one allocation (assuming you know approximately how much memory you'll need in total) in the course of execution. If your program is latency-bound rather than memory-bound, you can write an allocation function that performs faster than malloc at the expense of some memory usage.
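That pattern can be sketched as an arena that is reset wholesale once a batch of allocations is no longer needed (the names here are illustrative):

#include <cstddef>
#include <cstdlib>

// One big allocation up front; individual "allocations" are just a
// moving offset, and "freeing" everything at once is a single reset.
struct Arena
{
    char *base;
    std::size_t size, used;
};

Arena arena_create(std::size_t size)
{
    Arena a = { static_cast<char*>(std::malloc(size)), size, 0 };
    return a;
}

void *arena_alloc(Arena &a, std::size_t n)
{
    if (a.used + n > a.size) return 0;   // exhausted; caller must handle
    void *p = a.base + a.used;
    a.used += n;
    return p;
}

// Allocations that do not overlap temporally can reuse the same bytes.
void arena_reset(Arena &a) { a.used = 0; }

void arena_destroy(Arena &a) { std::free(a.base); a.base = 0; }

Resetting wholesale is what makes this fast: the single up-front allocation is paid once, and "freeing" a whole batch is one assignment.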