
Segfault using std::shared_ptr during destruction likely due to too many function calls on the stack

Tags:

c++

c++11

The following code compiles and runs fine:

#include <memory>

struct MyTree {
    std::shared_ptr<MyTree> left;
    std::shared_ptr<MyTree> right;
    int val;

    MyTree(std::shared_ptr<MyTree> left_,
           std::shared_ptr<MyTree> right_,
           int val_)
        : left(left_), right(right_), val(val_) {}
};

int main() {
    std::shared_ptr<MyTree> t(
        new MyTree(std::shared_ptr<MyTree>(),
                   std::shared_ptr<MyTree>(),
                   0));
    for (int i = 0; i < 10000; i++) {
        t.reset(new MyTree(t, t, 0));
    }
}

However, when the for loop bound is changed from 10000 to 100000, I receive a segfault. Looking at the result in gdb, it appears that the destructors invoked by the reference counting in std::shared_ptr produce a backtrace thousands of frames deep, so I think the segfault is due to running out of stack space from the nested function calls. I have two questions. First, is this a correct assessment of the segfault? Second, if so, is there a good way to manage custom data structures such as trees that need this kind of automatic cleanup but may be extremely large? Thanks.

asked Dec 27 '12 by wyer33

2 Answers

This isn't usually a problem, because normally you keep a tree balanced and the depth is O(lg N).

You've instead got a weird singly-linked list, with a duplicate copy of every pointer. That's... odd.

A real singly-linked list would also require very deep recursion to destroy, but might benefit from tail-call optimization and avoid exhausting the stack.

The problem you're having is really quite specific to your mix of the two data structures, which has no benefit that I can see.

answered Nov 07 '22 by Ben Voigt


Your assessment looks totally correct to me. It looks like the recursive calls to delete child subtrees are exceeding your stack size. This isn't specific to shared_ptr, though: I would expect any recursive algorithm on the data structure to fail in the same way.

If possible on your platform, the simplest way to deal with the need for large structures like that is simply to increase your stack size (for example, with ulimit) so that the natural recursive algorithm can function.
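For instance, from a POSIX shell you can inspect and raise the soft stack limit before launching the program (values are in KiB; the maximum you can set is capped by the hard limit, and exact behavior is platform-dependent):

```shell
# Show the current soft stack limit in KiB ("unlimited" is also possible)
ulimit -s

# Raise the soft limit for this shell and its children, e.g. to 64 MiB
ulimit -s 65536
```

The change applies only to the current shell session and processes started from it.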

If that's not possible, you're going to have to traverse the nodes yourself, storing the results of the traversal in a container of some sort, so that you can chop off subnodes without requiring a full-depth recursive traversal of the tree structure.

answered Nov 07 '22 by Mark B