
C++ compilation: is it possible to set the size of the vptr (global vtable + 2-byte index)?

I recently posted a question about the memory overhead due to virtuality in C++. The answers allowed me to understand how vtables and vptrs work. My problem is the following: I work on supercomputers, I have billions of objects, and consequently I have to care about the memory overhead due to virtuality. After some measurements, I found that when I use classes with virtual functions, each derived object carries its own 8-byte vptr. This is not negligible at all.
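
For illustration, a minimal measurement along these lines (the struct names are just examples, and the sizes are for a typical 64-bit platform):

#include <cstdio>

struct PlainParticle   { double x; };                                 // no virtual functions
struct VirtualParticle { double x; virtual ~VirtualParticle() {} };   // same payload plus a vptr

int main() {
    std::printf("%zu\n", sizeof(PlainParticle));    // typically 8
    std::printf("%zu\n", sizeof(VirtualParticle));  // typically 16: 8-byte payload + 8-byte vptr
    return 0;
}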

I wonder whether Intel icpc or g++ have some configuration/option/parameter to use "global" vtables and indexes of adjustable precision instead of a vptr, because such a thing would allow me to use a 2-byte index (unsigned short int) instead of an 8-byte vptr for billions of objects (a good reduction of the memory overhead). Is there any way to do that (or something like it) with compilation options?

Thank you very much.

asked May 12 '12 by Vincent


1 Answer

Unfortunately... not automatically.

But remember that a v-table is nothing but one way of implementing runtime polymorphism. If you are willing to re-engineer your code, there are several alternatives:

  1. External polymorphism
  2. Hand-made v-tables
  3. Hand-made polymorphism

1) External polymorphism

The idea is that sometimes you only need polymorphism in a transient fashion. That is, for example:

std::vector<Cat> cats;
std::vector<Dog> dogs;
std::vector<Ostrich> ostriches;

void dosomething(Animal const& a);

In this situation it seems wasteful for Cat or Dog to embed a virtual pointer, because you already know the dynamic type (they are stored by value).

External polymorphism is about having pure concrete types and pure interfaces, as well as a simple bridge in the middle to temporarily (or permanently, but it's not what you want here) adapt a concrete type to an interface.

// Interface
class Animal {
public:
    virtual ~Animal() {}

    virtual size_t age() const = 0;
    virtual size_t weight() const = 0;

    virtual void eat(Food const&) = 0;
    virtual void sleep(Duration) = 0;

protected:
    // Needed: declaring the copy constructor (even deleted) suppresses
    // the implicit default constructor, which AnimalT relies on.
    Animal() = default;

private:
    Animal(Animal const&) = delete;
    Animal& operator=(Animal const&) = delete;
};

// Concrete class
class Cat {
public:
    size_t age() const;
    size_t weight() const;

    void eat(Food const&);
    void sleep(Duration);
};

The bridge is written once and for all:

template <typename T>
class AnimalT: public Animal {
public:
    AnimalT(T& r): _ref(r) {}

    virtual size_t age() const override { return _ref.age(); }
    virtual size_t weight() const override { return _ref.weight(); }

    virtual void eat(Food const& f) override { _ref.eat(f); }
    virtual void sleep(Duration const d) override { _ref.sleep(d); }

private:
    T& _ref;
};

template <typename T>
AnimalT<T> iface_animal(T& r) { return AnimalT<T>(r); }

And you can use it like so:

for (auto& c: cats) { dosomething(iface_animal(c)); } // non-const: the adapter may call non-const members

It incurs an overhead of two pointers per item (the adapter's v-pointer and the reference), but only for as long as you need polymorphism.

An alternative is to have AnimalT<T> work with values too (instead of references) and to provide a clone method, which lets you choose freely between having a v-pointer or not depending on the situation.

In this case, I advise using a simple class:

template <typename T> struct ref { ref(T& t): _ref(t) {} T& _ref; };

template <typename T>
T& deref(T& r) { return r; }

template <typename T>
T& deref(ref<T>& r) { return r._ref; } // without this overload, a non-const ref<T> would select deref(T&) above

template <typename T>
T& deref(ref<T> const& r) { return r._ref; }

And then modify the bridge a bit:

// requires <memory> for std::unique_ptr
template <typename T>
class AnimalT: public Animal {
public:
    AnimalT(T r): _r(r) {}

    std::unique_ptr<Animal> clone() const { return std::unique_ptr<Animal>(new AnimalT<T>(_r)); }

    virtual size_t age() const override { return deref(_r).age(); }
    virtual size_t weight() const override { return deref(_r).weight(); }

    virtual void eat(Food const& f) override { deref(_r).eat(f); }
    virtual void sleep(Duration const d) override { deref(_r).sleep(d); }

private:
    T _r;
};

template <typename T>
AnimalT<T> iface_animal(T r) { return AnimalT<T>(r); }

template <typename T>
AnimalT<ref<T>> iface_animal_ref(T& r) { return AnimalT<ref<T>>(r); }

This way you can choose, case by case, when you want polymorphic storage and when you do not.
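
For example, a possible usage sketch with the hypothetical Cat from above. (Since Animal is not copyable, returning the adapters by value from iface_animal relies on C++17's guaranteed copy elision; constructing them directly, as here, also works under C++11.)

Cat cat;

// Wrapper that owns a copy of cat: it can safely outlive it.
AnimalT<Cat> owned{cat};

// Wrapper that merely references cat: no copy, but cat must stay alive.
AnimalT<ref<Cat>> viewed{ref<Cat>{cat}};

dosomething(owned);   // both present the same Animal interface
dosomething(viewed);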


2) Hand-made v-tables

(only easily works on closed hierarchies)

It is common in C to emulate object orientation by providing one's own v-table mechanism. Since you appear to know what a v-table is and how the v-pointer works, you can perfectly well implement the mechanism yourself.

class Foo; // forward declaration: the member-pointer type below needs it

struct FooVTable {
    typedef void (Foo::*DoFunc)(int, int);

    DoFunc _do;
};

And then provide a global array for the hierarchy anchored in Foo:

extern FooVTable const* const FooVTableFoo;
extern FooVTable const* const FooVTableBar;

FooVTable const* const FooVTables[] = { FooVTableFoo, FooVTableBar };

enum class FooVTableIndex: unsigned short {
    Foo,
    Bar
};

Then all you need in your Foo class is to hold onto the index of the most derived type:

class Foo {
public:

    void dofunc(int i, int j) {
        (this->*(table()->_do))(i, j);
    }

protected:
    // Each derived class passes its own index up at construction time.
    explicit Foo(FooVTableIndex i): _vindex(i) {}

    FooVTable const* table() const {
        // enum class values do not convert implicitly to an array index, hence the cast
        return FooVTables[static_cast<unsigned short>(_vindex)];
    }

private:
    FooVTableIndex _vindex;
};

The hierarchy is closed because the FooVTables array and the FooVTableIndex enumeration need to be aware of all the types in the hierarchy.
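
To make this concrete, here is how a hypothetical derived class could set up its entry. This is only a sketch under the assumptions above: do_impl and barVTable are made-up names, and the snippet is assumed to live in a translation unit that sees the declarations above.

// A derived class records its index and provides a plain (non-virtual) implementation.
class Bar: public Foo {
public:
    Bar(): Foo(FooVTableIndex::Bar) {}

    void do_impl(int i, int j);   // reached only through the hand-made table
};

// A pointer to a member of Bar may be stored as a pointer to a member of Foo
// via static_cast; it is only ever invoked on objects that really are Bars.
static FooVTable const barVTable = { static_cast<FooVTable::DoFunc>(&Bar::do_impl) };
FooVTable const* const FooVTableBar = &barVTable;   // matches the extern declaration above

// FooVTableFoo would be defined the same way, pointing at Foo's own implementation.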

The enum index can be bypassed, though: by making the array non-const and pre-sizing it generously, each derived type can register itself automatically at initialization time. Index conflicts are then detected during this init phase, and automatic resolution is even possible (scanning the array for a free slot), as sketched below.

This may be less convenient, but it does provide a way to open up the hierarchy. Obviously, it is easier to do this before any thread is launched, since we are talking about global variables here.
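
A rough sketch of the automatic-resolution variant (the array size, register_foo_vtable, and BarIndex are all made up for illustration). This replaces the fixed FooVTables array above, assumes registration runs before any thread starts, and in this variant Foo would store the raw unsigned short index instead of the enum:

#include <cassert>
#include <cstddef>

// Oversized, non-const table filled in during static initialization.
std::size_t const kMaxFooTypes = 1 << 16;        // fits in an unsigned short index
FooVTable const* FooVTables[kMaxFooTypes] = {};

// Finds a free slot, stores the table there, and returns the chosen index.
unsigned short register_foo_vtable(FooVTable const* table) {
    for (std::size_t i = 0; i != kMaxFooTypes; ++i) {
        if (FooVTables[i] == 0) {
            FooVTables[i] = table;
            return static_cast<unsigned short>(i);
        }
    }
    assert(0 && "FooVTables is full");
    return 0;
}

// Each derived type registers itself once, e.g. in Bar.cpp next to barVTable:
static unsigned short const BarIndex = register_foo_vtable(&barVTable);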


3) Hand-made polymorphism

(only really works for closed hierarchies)

This last approach is based on my experience exploring the LLVM/Clang codebase. A compiler faces the very same problem you do: for tens or hundreds of thousands of small items, a v-pointer per item really increases memory consumption, which is annoying.

Therefore, they took a simple approach:

  • each class hierarchy has a companion enum listing all members
  • each class in the hierarchy passes its companion enumerator to its base upon construction
  • virtuality is achieved by switching over the enum and casting appropriately

In code:

enum class FooType { Foo, Bar, Bor };

class Foo {
public:
    int dodispatcher() {
        switch(_type) {
        case FooType::Foo:
            return static_cast<Foo&>(*this).dosomething();

        case FooType::Bar:
            return static_cast<Bar&>(*this).dosomething();

        case FooType::Bor:
            return static_cast<Bor&>(*this).dosomething();
        }
        assert(0 && "Should never get there");
    }

protected:
    // Each class in the hierarchy passes its enumerator to its base upon construction.
    explicit Foo(FooType t): _type(t) {}

private:
    FooType _type;
};

The switches are pretty annoying, but they can be more or less automated by playing with some macros and a type list. LLVM typically uses a file like:

 // FooList.inc
 ACT_ON(Foo)
 ACT_ON(Bar)
 ACT_ON(Bor)

and then you do:

 int Foo::dodispatcher() {
     switch(_type) {
 #   define ACT_ON(X) case FooType::X: return static_cast<X&>(*this).dosomething();

 #   include "FooList.inc"

 #   undef ACT_ON
     }

     assert(0 && "Should never get there");
 }

Chris Lattner commented that, due to how switches are generated (using a table of code offsets), this produced code similar to that of a virtual dispatch, and thus had about the same CPU overhead, but with a lower memory overhead.

Obviously, the one drawback is that Foo.cpp needs to include the headers of all its derived classes, which effectively seals the hierarchy.


I deliberately presented the solutions from the most open to the most closed. They offer various degrees of complexity and flexibility, and it is up to you to choose which one suits you best.

One important note: in the latter two cases, destruction and copying require special care, as illustrated below.
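
For example, with the enum-based approach (3), destruction can be dispatched by hand through the same switch. This is only a sketch: destroy() is a made-up member that would have to be declared in Foo, and it relies on Bar and Bor being complete where it is defined (they are, since Foo.cpp already includes them).

// Call p->destroy() instead of `delete p` -- runs the right destructor
// without making ~Foo virtual (which would bring the v-pointer back).
void Foo::destroy() {
    switch (_type) {
    case FooType::Foo: delete this;                    return;
    case FooType::Bar: delete static_cast<Bar*>(this); return;
    case FooType::Bor: delete static_cast<Bor*>(this); return;
    }
    assert(0 && "Should never get there");
}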

answered Oct 03 '22 by Matthieu M.