I am trying to figure out how C and C++ store large objects on the stack. Usually, each stack slot is the size of an integer, so I don't understand how larger objects are stored there. Do they simply take up multiple stack "slots"?
In general, large objects should be created on the heap; the stack should be reserved for small objects relevant to a particular stack context.
Stack overflow: the stack has a limited size, and consequently can only hold a limited amount of information. On Windows, the default stack size is 1 MB; on some Unix machines it can be as large as 8 MB. If the program tries to put too much information on the stack, a stack overflow results.
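To make that concrete, here's a hedged little sketch (exact limits and failure modes depend on your OS, compiler, and build settings); one oversized local is enough to blow through the default stack:

#include <cstring>

void blow_the_stack()
{
    // 16 MB of locals: past Windows' 1 MB default and a typical 8-10 MB Unix limit.
    char huge[16 * 1024 * 1024];
    std::memset(huge, 1, sizeof(huge));   // touch it so it isn't optimized away
}

int main()
{
    blow_the_stack();   // very likely dies with a stack overflow / SIGSEGV
    return 0;
}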
True, some operating systems do have stack limitations. (Some of those also have nasty heap limitations too!)
But this isn't 1985 any more.
These days, I run Linux!
My default stacksize is limited to 10 MB. My default heapsize is unlimited. It's pretty trivial to unlimit that stacksize. (*cough* [tcsh] unlimit stacksize *cough*. Or setrlimit().)
The biggest differences between stack and heap are:
Under Linux, both stack and heap are managed through virtual memory.
In terms of allocation time, even heap-searching through badly fragmented memory can't hold a candle to mapping in new pages of memory. Time-wise the differences are negligible!
Depending on your OS, oftentimes it's only when you actually use those new memory pages that they are mapped in. (NOT during the malloc() allocation!) (It's a lazy evaluation thing.)
(new would invoke the constructor, which would presumably use those memory pages...)
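A rough sketch of that lazy mapping, assuming Linux-style demand paging with default overcommit. Watch the process's resident size (RSS) before and after the memset():

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    const size_t size = 1024UL * 1024UL * 1024UL;   // 1 GiB

    // On Linux this typically just reserves address space;
    // physical pages get mapped in lazily, on first touch.
    char *big = static_cast<char *>(std::malloc(size));
    if (big == nullptr) return 1;

    std::puts("allocated -- resident size is still tiny here");

    // Touching the pages is what actually faults them in.
    std::memset(big, 0, size);
    std::puts("touched -- now the pages are really mapped");

    std::free(big);
    return 0;
}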
You can thrash the VM system by creating and destroying large objects on either the stack or the heap. It depends on your OS/compiler whether memory can be (or is) reclaimed by the system. If it's not reclaimed, the heap might be able to reuse it. (Assuming it hasn't been repurposed by another malloc() in the meantime.) Similarly, if stack memory is not reclaimed, it would just be reused.
Though pages that get swapped out would need to be swapped back in, and that's going to be your biggest time-hit.
Of all these things, I worry about memory fragmentation the most!
Lifespan (when it goes out of scope) is always the deciding factor.
But when you run programs for long periods of time, fragmentation creates a gradually increasing memory footprint. The constant swapping eventually kills me!
Something just wasn't adding up here... I figured either *I* was way the hell off base. Or everyone else was. Or, more likely, both. Or, just maybe, neither.
Whatever the answer, I had to know what was going on!
...This is going to be long. Bear with me...
I've spent most of the last 12 years working under Linux. And about 10 years before that under various flavors of Unix. My perspective on computers is somewhat biased. I have been spoiled!
I've done a little with Windows, but not enough to speak authoritatively. Nor, tragically, with Mac OS/Darwin either... Though Mac OS/Darwin/BSD is close enough that some of my knowledge carries over.
With 32-bit pointers, you run out of address space at 4 GB (2^32).
Practically speaking, STACK+HEAP combined is usually limited to somewhere between 2-4 GB as other things need to get mapped in there.
(There's shared memory, shared libraries, memory-mapped files, the executable image you're running is always nice, etc.)
Under Linux/Unix/MacOS/Darwin/BSD, you can artificially constrain the HEAP or the STACK to whatever arbitrary values you want at runtime. But ultimately there is a hard system limit.
This is the distinction (in tcsh) of "limit" vs "limit -h". Or (in bash) of "ulimit -Sa" vs "ulimit -Ha". Or, programmatically, of rlim_cur vs rlim_max in struct rlimit.
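Programmatically, that looks something like this. (Just a sketch; note that RLIM_INFINITY prints as a very large number.)

#include <sys/resource.h>
#include <cstdio>

int main()
{
    struct rlimit limits;
    if (getrlimit(RLIMIT_STACK, &limits) == 0)
    {
        // rlim_cur is the soft limit ("limit" / "ulimit -Sa"),
        // rlim_max is the hard ceiling ("limit -h" / "ulimit -Ha").
        std::printf("stack soft limit: %llu\n", (unsigned long long) limits.rlim_cur);
        std::printf("stack hard limit: %llu\n", (unsigned long long) limits.rlim_max);
    }
    return 0;
}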
Now we get to the fun part. With respect to Martin York's Code. (Thank you Martin! Good example. Always good to try things out!)
Martin's presumably running on a Mac. (A fairly recent one. His compiler build is newer than mine!)
Sure, his code won't run on his Mac by default. But it will run just fine if he first invokes "unlimit stacksize" (tcsh) or "ulimit -Ss unlimited" (bash).
Testing on an ancient (obsolete) Linux RH9 2.4.x kernel box, allocating large amounts of STACK OR HEAP, either one by itself tops out between 2 and 3 GB. (Sadly, the machine's RAM+SWAP tops out at a little under 3.5 GB. It's a 32-bit OS. And this is NOT the only process running. We make do with what we have...)
So there really are no limitations on STACK size vs HEAP size under Linux, other than the artificial ones...
On a Mac, there's a hard stacksize limit of 65532 kilobytes. It has to do with how things are laid out in memory.
Normally, you think of an idealized system as having STACK at one end of the memory address space, HEAP at the other, and they build towards each other. When they meet, you are out of memory.
Macs appear to stick their shared system libraries in between, at a fixed offset, limiting both sides. You can still run Martin York's Code with "unlimit stacksize", since he's only allocating something like 8 MiB (< 64 MiB) of data. But he'll run out of STACK long before he runs out of HEAP.
I'm on Linux. I won't. Sorry kid. Here's a Nickel. Go get yourself a better OS.
There are workarounds for the Mac. But they get ugly and messy and involve tweaking the kernel or linker parameters.
In the long run, unless Apple does something really dumb, 64-bit address spaces will render this whole stack-limitation thing obsolete sometime Real Soon Now.
Anytime you push something onto the STACK it's appended to the end. And it's removed (rolled back) whenever the current block exits.
As a result, there are no holes in the STACK. It's all one big solid block of used memory. With perhaps just a little unused space at the very end all ready for reuse.
In contrast, as HEAP is allocated and free'd, you wind up with unused-memory holes. These can gradually lead to an increased memory footprint over time. Not what we usually mean by a core leak, but the results are similar.
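Here's a contrived sketch of how those holes accumulate. (Illustrative only; the exact behavior depends on your malloc() implementation.)

#include <cstdlib>

int main()
{
    // A run of small allocations...
    char *blocks[1000];
    for (int i = 0; i < 1000; ++i)
        blocks[i] = static_cast<char *>(std::malloc(64));

    // ...then free every other one. Each freed 64-byte hole is reusable,
    // but a bigger request can't fit in any single hole, so the allocator
    // grabs fresh memory and the footprint stays inflated.
    for (int i = 0; i < 1000; i += 2)
        std::free(blocks[i]);

    char *bigger = static_cast<char *>(std::malloc(4096));

    std::free(bigger);
    for (int i = 1; i < 1000; i += 2)
        std::free(blocks[i]);
    return 0;
}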
Memory Fragmentation is NOT a reason to avoid HEAP storage. It's just something to be aware of when you're coding.
Allocate a lot of small, short-lived objects on the HEAP, though, and you can wind up with a large number of variables, all utilized within a small localized region of the code, that are scattered across a great many virtual memory pages. (As in: you are using 4 bytes on this 2k page, 8 bytes on that 2k page, and so on for a whole lot of pages...)
All of which means that your program needs to have a large number of pages swapped in to run. Or it's going to be swapping pages in and out constantly. (We call that thrashing.)
On the other hand, had these small allocations been made on the STACK, they would all be located in a contiguous stretch of memory. Fewer VM memory pages would need to be loaded. (4+8+... < 2k for the win.)
Sidenote: My reason for calling attention to this stems from a certain electrical engineer I knew who insisted that all arrays be allocated on the HEAP. We were doing matrix math for graphics. A *LOT* of 3 or 4 element arrays. Managing new/delete alone was a nightmare. Even abstracted away in classes it caused grief!
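For what it's worth, here's a minimal sketch of the two styles from that sidenote. (The names are made up for illustration; it's not the actual code we were using.)

// HEAP version: every little vector costs a new/delete and can land
// on a different page from its neighbors.
double *make_vec4_on_heap()
{
    double *v = new double[4];
    for (int i = 0; i < 4; ++i) v[i] = 0.0;
    return v;   // the caller has to remember to delete[] it
}

// STACK version: the 32 bytes live right in the current stack frame,
// packed next to everything else the function is using, and vanish
// automatically when the function returns.
void use_vec4_on_stack()
{
    double v[4] = { 0.0, 0.0, 0.0, 0.0 };
    v[0] = 1.0;
}

int main()
{
    use_vec4_on_stack();
    double *v = make_vec4_on_heap();
    delete[] v;
    return 0;
}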
Yes, threads are limited to very tiny stacks by default.
You can change that with pthread_attr_setstacksize(). Though depending on your threading implementation, if multiple threads are sharing the same 32-bit address space, large individual per-thread stacks will be a problem! There just ain't that much room! Again, transitioning to 64-bit address spaces (OS's) will help.
#include <pthread.h>

pthread_t      threadData;
pthread_attr_t threadAttributes;

pthread_attr_init( & threadAttributes );

/* Detach the thread and give it a 128 MB stack of its own. */
ASSERT_IS( 0, pthread_attr_setdetachstate( & threadAttributes, PTHREAD_CREATE_DETACHED ) );
ASSERT_IS( 0, pthread_attr_setstacksize  ( & threadAttributes, 128 * 1024 * 1024 ) );
ASSERT_IS( 0, pthread_create             ( & threadData, & threadAttributes, & runthread, NULL ) );
Perhaps you and I are thinking of different things?
When I think of a stack frame, I think of a call stack. Each function or method has its own stack frame consisting of the return address, arguments, and local data.
I've never seen any limitations on the size of a stack frame. There are limitations on the STACK as a whole, but that's all the stack frames combined.
There's a nice diagram and discussion of stack frames over on Wiki.
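To make the "no per-frame limit" point concrete, a hedged sketch: one huge frame is perfectly legal by itself; it's the sum of all the frames that runs into the STACK limit.

#include <cstring>

void big_frame(int depth)
{
    // Roughly 1 MB of locals in this one frame: no problem by itself.
    char locals[1024 * 1024];
    std::memset(locals, 0, sizeof(locals));

    // But every recursive call piles another 1 MB frame on top.
    // A handful of levels is fine; dozens will hit a typical 8-10 MB limit.
    if (depth > 0)
        big_frame(depth - 1);
}

int main()
{
    big_frame(4);     // ~5 MB of combined frames: fine under an 8 MB limit
    // big_frame(50); // almost certainly a stack overflow
    return 0;
}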
Under Linux/Unix/MacOS/Darwin/BSD, it is possible to change the maximum STACK size limitation programmatically, as well as with limit (tcsh) or ulimit (bash):
#include <sys/resource.h>

struct rlimit limits;

limits.rlim_cur = RLIM_INFINITY;   /* soft limit */
limits.rlim_max = RLIM_INFINITY;   /* hard limit */

ASSERT_IS( 0, setrlimit( RLIMIT_STACK, & limits ) );
Just don't try to set it to INFINITY on a Mac... And change it before you try to use it. ;-)