Thread class memory allocation oddity on an embedded platform

I'm running into a strange issue that I've been able to track down somewhat, but I still can't see the cause. Maybe someone here can shed some light?

I'm running on a PowerPC processor on top of VxWorks 5.5 developing in C++ with the PPCgnu604 toolchain.

I have a class like so:

class MyClass
{
  public:
    void run( void );
  private:
    CommandMessageClass command;
    StatusMessageClass status;
};

When my application is started, it will dynamically allocate an instance of MyClass and spawn a thread pointing to its "run" function. Essentially it just sits there polling for commands and, upon receipt, will issue a status back.

Note that this is a chopped down version of the class. There are a number of other methods and variables left out for brevity.
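Roughly, the startup looks like this (a sketch only; the wrapper function runMyClass, the function startMyClass, the task name, and the priority/stack-size numbers are placeholders, not my real values; taskSpawn is the standard VxWorks 5.5 taskLib call):

#include <vxWorks.h>
#include <taskLib.h>

extern "C" int runMyClass( int arg )
{
    // The spawned task simply enters the class's polling loop.
    ( (MyClass *) arg )->run();
    return 0;
}

void startMyClass( void )
{
    MyClass *instance = new MyClass;          // the one-time heap allocation

    taskSpawn( "tMyClass",                    // task name (placeholder)
               100,                           // priority (placeholder)
               0,                             // options
               0x4000,                        // stack size (placeholder)
               (FUNCPTR) runMyClass,
               (int) instance,                // pass the object to the task
               0, 0, 0, 0, 0, 0, 0, 0, 0 );
}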

The issue I see is that when both the command and status messages are defined as private class members, I get a change in the number of bytes free in memory even though there should be no dynamic memory allocation. This matters because it is occurring in what needs to be a deterministic, rate-safe procedure.

If I move one or both of the message declarations into the run function, it works fine with no additional allocation!
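For reference, this is the variant that behaves as expected (just a sketch; the real polling loop body is omitted):

void MyClass::run( void )
{
    CommandMessageClass command;   // now on the task's stack
    StatusMessageClass  status;    // sized at compile time, no heap use

    for ( ;; )
    {
        // poll for a command and, on receipt, issue a status back ...
    }
}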

I must be missing something fundamental in my understanding of C++ declarations and memory allocation. My understanding is that a class instance I dynamically instantiate will be fully allocated on the heap (including all member variables) when it is created. The difference here is that moving the message declarations into the run function puts them on the stack instead. The heap in this case is more than large enough to accommodate the entire size of the class. Why does it seem not to allocate enough memory until specific portions are used?

The message classes do no dynamic allocation of their own. (And if they did, I would expect moving the declaration would not change the behavior in this case and I would still see a change in the size of the heap.)

To monitor the memory allocation I'm using the following VxWorks memLib (or memPartLib) call:

#include <memLib.h>    /* memPartInfoGet() and MEM_PART_STATS */

MEM_PART_STATS partitionStatus;
memPartInfoGet( memSysPartId, &partitionStatus );
...
bytesFree = partitionStatus.numBytesFree;

Edit:

To clarify, the MyClass object is instantiated and initialized in an initialization routine, and then the code enters rate-safe processing. During this time, upon receipt of a command message over a serial line (the first interaction with the command or status message objects), additional memory is allocated (or rather, the number of bytes free decreases). This is bad because dynamic memory allocation is not deterministic.

I've been able to get rid of the problem by moving the class variables as I've described.

Anthony asked Aug 02 '10


1 Answer

I must be missing something fundamental in my understanding of C++ declarations and memory allocation.

I don't think so. Everything you say that you expect above is correct -- game programmers rely heavily on this behavior all the time. :-)

Why does it seem not to be allocating enough memory until specific portions are used?

You've left out the guts of the class for brevity. I've had some experience debugging similar issues, and my best guess is that somewhere in there a library function is, in fact, making a runtime allocation that you don't know about.

In other words, the runtime allocation is there in both cases, but the two different sizes of MyClass mean that the malloc pools are filled differently. You could prove this by moving the objects to the stack inside run(), but padding MyClass out to the same size. If you still see the free mem drop, then it has nothing to do with whether those objects are on the heap or the stack ... it's a secondary effect that's happening because of the size of MyClass.
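Something like this is what I mean (just a sketch; the padding size won't be byte-exact because of alignment, but it keeps sizeof(MyClass) roughly the same):

class MyClass
{
  public:
    void run( void );
  private:
    // Dummy padding roughly the size of the two message objects,
    // so sizeof(MyClass) stays about the same as before.
    char padding[ sizeof( CommandMessageClass ) + sizeof( StatusMessageClass ) ];
};

void MyClass::run( void )
{
    CommandMessageClass command;   // moved to the task's stack
    StatusMessageClass  status;

    // ... same polling loop as before ...
}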

Remember, malloc is chunky -- most implementations don't do one-to-one allocations for each call to malloc. Instead, it over-allocates, keeps the memory around in pools, and grows those pools when necessary.
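You can see this directly with the same memPartInfoGet() call you're already using; in a quick sketch like the following (showMallocOverhead is a made-up helper, not from your code), the drop in numBytesFree after a tiny malloc is usually larger than the requested size because of block headers, alignment, and pool growth:

#include <memLib.h>
#include <stdlib.h>
#include <stdio.h>

void showMallocOverhead( void )
{
    MEM_PART_STATS before, after;

    memPartInfoGet( memSysPartId, &before );
    void *p = malloc( 8 );                       /* tiny request */
    memPartInfoGet( memSysPartId, &after );

    printf( "requested 8 bytes, free memory dropped by %lu bytes\n",
            (unsigned long)( before.numBytesFree - after.numBytesFree ) );

    free( p );
}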

I'm not familiar with your toolchain, but typical suspects for unexpected small allocations on embedded systems include the ctype functions (locale tables) and the date/time functions (time zone data).
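If one of those turns out to be the culprit, a common workaround (again just a sketch; warmUpLibraries is a made-up name) is to touch the suspect facilities once during initialization, so any lazy one-time allocations happen before the rate-safe processing begins:

#include <ctype.h>
#include <time.h>
#include <stdio.h>

void warmUpLibraries( void )
{
    volatile int c = toupper( 'a' );             /* may pull in locale tables */

    char buf[ 32 ];
    time_t now = time( 0 );
    strftime( buf, sizeof( buf ), "%H:%M:%S", localtime( &now ) );  /* may initialize time zone data */

    printf( "%d %s\n", c, buf );                 /* keep the calls from being optimized out */
}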

Drew Thaler answered Oct 03 '22