Pointer to class data member "::*"

It's a "pointer to member" - the following code illustrates its use:

#include <iostream>
using namespace std;

class Car
{
    public:
    int speed;
};

int main()
{
    int Car::*pSpeed = &Car::speed;

    Car c1;
    c1.speed = 1;       // direct access
    cout << "speed is " << c1.speed << endl;
    c1.*pSpeed = 2;     // access via pointer to member
    cout << "speed is " << c1.speed << endl;
    return 0;
}

As to why you would want to do that, well, it gives you another level of indirection that can solve some tricky problems. But to be honest, I've never had to use them in my own code.

Edit: I can't think off-hand of a convincing use for pointers to member data. Pointers to member functions can be used in pluggable architectures, but once again producing an example in a small space defeats me. The following is my best (untested) try - an Apply function that would do some pre- and post-processing before applying a user-selected member function to an object:

void Apply( SomeClass * c, void (SomeClass::*func)() ) {
    // do hefty pre-call processing
    (c->*func)();  // call user specified function
    // do hefty post-call processing
}

The parentheses around c->*func are necessary because the ->* operator has lower precedence than the function call operator.
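
For illustration, a minimal (untested, made-up) usage sketch might look like this, with SomeClass and its member functions invented for the example:

// (SomeClass would of course need to be declared before Apply above)
class SomeClass {
public:
    void refresh() { }
    void reload()  { }
};

int main() {
    SomeClass obj;
    Apply(&obj, &SomeClass::refresh);   // wraps the pre/post processing around refresh()
    Apply(&obj, &SomeClass::reload);    // same wrapper, different member function
}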


This is the simplest example I can think of that conveys the rare cases where this feature is pertinent:

#include <iostream>

class bowl {
public:
    int apples;
    int oranges;
};

int count_fruit(bowl * begin, bowl * end, int bowl::*fruit)
{
    int count = 0;
    for (bowl * iterator = begin; iterator != end; ++ iterator)
        count += iterator->*fruit;
    return count;
}

int main()
{
    bowl bowls[2] = {
        { 1, 2 },
        { 3, 5 }
    };
    std::cout << "I have " << count_fruit(bowls, bowls + 2, & bowl::apples) << " apples\n";
    std::cout << "I have " << count_fruit(bowls, bowls + 2, & bowl::oranges) << " oranges\n";
    return 0;
}

The thing to note here is the pointer passed in to count_fruit. This saves you having to write separate count_apples and count_oranges functions.
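
For what it's worth, the same member-pointer parameter also drops straight into the standard algorithms; as a rough sketch, the body of count_fruit could equally be written with std::accumulate:

#include <numeric>

int count_fruit(const bowl * begin, const bowl * end, int bowl::*fruit)
{
    return std::accumulate(begin, end, 0,
        [fruit](int count, const bowl & b) { return count + b.*fruit; });
}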


Another application is intrusive lists. The element type can tell the list what its next/prev pointers are. So the list does not use hard-coded names but can still use existing pointers:

// say this is some existing structure. And we want to use
// a list. We can tell it that the next pointer
// is apple::next.
struct apple {
    int data;
    apple * next;
};

// simple example of a minimal intrusive list. Could specify the
// member pointer as template argument too, if we wanted:
// template<typename E, E *E::*next_ptr>
template<typename E>
struct List {
    List(E *E::*next_ptr):head(0), next_ptr(next_ptr) { }

    void add(E &e) {
        // access its next pointer by the member pointer
        e.*next_ptr = head;
        head = &e;
    }

    E * head;
    E *E::*next_ptr;
};

int main() {
    List<apple> lst(&apple::next);

    apple a;
    lst.add(a);
}
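
Just as a sketch of reading the member pointer back out, a traversal helper built on the List above could follow the stored next_ptr like this:

#include <iostream>

// walk the chain by repeatedly following the stored member pointer
template<typename E, typename F>
void for_each_element(List<E> & lst, F f) {
    for (E * e = lst.head; e != 0; e = e->*lst.next_ptr)
        f(*e);
}

// e.g. print every element's data field:
//   for_each_element(lst, [](apple & a) { std::cout << a.data << "\n"; });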

Here's a real-world example I am working on right now, from signal processing / control systems:

Suppose you have some structure that represents the data you are collecting:

struct Sample {
    time_t time;
    double value1;
    double value2;
    double value3;
};

Now suppose that you stuff them into a vector:

std::vector<Sample> samples;
... fill the vector ...

Now suppose that you want to calculate some function (say the mean) of one of the variables over a range of samples, and you want to factor this mean calculation into a function. The pointer-to-member makes it easy:

double Mean(std::vector<Sample>::const_iterator begin, 
    std::vector<Sample>::const_iterator end,
    double Sample::* var)
{
    double mean = 0;
    int samples = 0;
    for(; begin != end; ++begin) {
        const Sample& s = *begin;
        mean += s.*var;   // read whichever member the caller selected
        samples++;
    }
    mean /= samples;
    return mean;
}

...
double mean = Mean(samples.begin(), samples.end(), &Sample::value2);

Note: Edited 2016/08/05 for a more concise template-function approach

And, of course, you can template it to compute a mean for any forward-iterator and any value type that supports addition with itself and division by size_t:

#include <iterator>
#include <vector>

template<typename Titer, typename S>
S mean(Titer begin, const Titer& end, S std::iterator_traits<Titer>::value_type::* var) {
    using T = typename std::iterator_traits<Titer>::value_type;
    S sum = 0;
    size_t samples = 0;
    for( ; begin != end ; ++begin ) {
        const T& s = *begin;
        sum += s.*var;
        samples++;
    }
    return sum / samples;
}

struct Sample {
    double x;
};

std::vector<Sample> samples { {1.0}, {2.0}, {3.0} };
double m = mean(samples.begin(), samples.end(), &Sample::x);

EDIT - The above code has performance implications

You should note, as I soon discovered, that the code above has some serious performance implications. The summary is that if you're calculating a summary statistic on a time series, or calculating an FFT, etc., then you should store the values for each variable contiguously in memory. Otherwise, iterating over the series drags every whole Sample through the cache just to read one field of it, and most of the memory bandwidth is wasted.

Consider the performance of this code:

struct Sample {
  float w, x, y, z;
};

std::vector<Sample> series = ...;

float sum = 0;
int samples = 0;
for(auto it = series.begin(); it != series.end(); ++it) {
  sum += it->x;   // only the x member is actually used
  samples++;
}
float mean = sum / samples;

Each Sample here is 16 bytes, so on a typical architecture with 64-byte cache lines only four of them fit in a line. Every cache miss pulls a whole line from memory, but only the 4 bytes of x in each 16-byte Sample are actually used; the other 12 bytes are loaded and then thrown away. So three quarters of the memory traffic is wasted, and the loop keeps missing the cache every few iterations.

Much better to do this:

struct Samples {
  std::vector<float> w, x, y, z;
};

Samples series = ...;

float sum = 0;
int samples = 0;
for(auto it = series.x.begin(); it != series.x.end(); ++it) {
  sum += *it;
  samples++;
}
float mean = sum / samples;

Now when the first x value is loaded from memory, the next fifteen floats in the same 64-byte cache line come with it (supposing suitable alignment), so the following fifteen iterations need no memory access at all and every byte you fetch is actually used.

The above algorithm can be improved somewhat further through the use of SIMD instructions on, e.g., SSE2-capable architectures. However, these work much better if the values are all contiguous in memory, so that a single instruction can load four samples together (eight or sixteen at a time with the wider AVX and AVX-512 registers).
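
As a rough sketch (assuming an x86 target with SSE available, which is baseline on x86-64), the contiguous layout lets a single intrinsic load and add four floats at a time:

#include <immintrin.h>
#include <vector>
#include <cstddef>

// Sum a contiguous vector of floats four lanes at a time, then finish the
// remainder with scalar code.
float simd_sum(const std::vector<float>& xs)
{
    __m128 acc = _mm_setzero_ps();
    std::size_t i = 0;
    for (; i + 4 <= xs.size(); i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(&xs[i]));  // load and add 4 floats at once

    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];

    for (; i < xs.size(); ++i)   // leftover elements
        sum += xs[i];
    return sum;
}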

YMMV - design your data structures to suit your algorithm.