Seeking suggestions for data representation of a probability distribution

I'm looking for an elegant and efficient way to represent and store an arbitrary probability distribution constructed by explicit sampling.

The distribution is expected to have the following properties:

  • Samples are floating-point values, but in principle can be thought of as having resolution down to 0.001
  • Samples are drawn from an interval [-4000; 4000]
  • However, for any two samples a, b, |a - b| < 40
  • 90% of the time, it will have a sharp peak or several sharp peaks close to each other
  • 10% of the time, it will have a peak with an uneven plateau of width 0.5 to 5.

The usual representation -- a histogram array -- is undesirable mainly because of the trade-off between quantization/resolution and space. I imagine there must be a method of representation that adaptively varies the bin size depending on local "complexity".

Space is of concern, because a higher-level grid-like data structure will contain thousands of cells, each containing at least one such probability representation. Easy serialization for disk or network transfer is desirable, but efficiency is not a priority.

Any help would be appreciated.

Asked Mar 20 '12 by George Skoptsov


2 Answers

Interesting problem. Here is a suggestion, which may be quite difficult to implement depending on how mathematically inclined you are.

Note that I trade speed for space, since what I suggest is likely to be quite heavy computationally (but this is to be tested against real data).

First, use a functional approach. A probability distribution is a probability measure:

#include <functional>

struct Distribution
{
    virtual ~Distribution() {}
    // A distribution is fully described by the integrals it assigns to
    // test functions: integrate(f) returns E[f(X)].
    virtual double integrate(std::function<double(double)>) = 0;
};

This way, you abstract away from the samples that you produce, since you do not want to store them. Convince yourself that you can do pretty much anything with an "integrate" method.

Of course, with the explicit samples, you do something like

#include <deque>

struct SampledDistribution : Distribution
{
    // Monte Carlo estimate: the integral of f against the empirical
    // distribution is the average of f over the stored samples.
    double integrate(std::function<double(double)> f) override
    {
        double acc = 0;
        for (double x : samples) acc += f(x);
        return acc / samples.size();
    }

    std::deque<double> samples;
};
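
For instance, a brief usage sketch of my own (with hypothetical sample values): the mean, the variance, or any tail probability falls out of integrate alone, without ever touching the raw samples again.

#include <cstdio>

int main()
{
    SampledDistribution d;
    d.samples = {1.0, 2.0, 2.5, 2.5, 3.0};   // hypothetical data

    // E[X] and Var[X] are just integrals of x and (x - mean)^2.
    double mean = d.integrate([](double x) { return x; });
    double var  = d.integrate([mean](double x) { return (x - mean) * (x - mean); });
    std::printf("mean = %g, variance = %g\n", mean, var);
}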

Now, the storage part:

The usual representation -- a histogram array -- is undesirable mainly because of the trade-off between quantization/resolution and space. I imagine there must be a method of representation that adaptively varies the bin size depending on local "complexity".

The traditional method is wavelets. You produce coefficients with calls to integrate, which you can serialize. You throw coefficients away if the variance of the integral estimator they produce is high.
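
To make that concrete, here is a minimal sketch using the Haar family, which I pick purely for illustration (the choice of family is discussed below). It assumes the samples have already been rescaled to [0, 1], and the names (haar_jk, wavelet_coefficients, max_level) are illustrative. The coefficients c_{j,k} are exactly the expectations E[psi_{j,k}(X)], i.e. plain calls to integrate; thresholding then amounts to dropping the small or noisy entries of the resulting vector before serializing.

#include <cmath>
#include <vector>

double haar(double x)                    // Haar mother wavelet on [0, 1)
{
    if (x >= 0.0 && x < 0.5) return  1.0;
    if (x >= 0.5 && x < 1.0) return -1.0;
    return 0.0;
}

double haar_jk(int j, int k, double x)   // dilated and translated: 2^{j/2} psi(2^j x - k)
{
    return std::pow(2.0, j / 2.0) * haar(std::pow(2.0, j) * x - k);
}

// c_{j,k} = E[psi_{j,k}(X)], estimated straight from the samples
// through integrate(); stored level-major (j outer, k inner).
std::vector<double> wavelet_coefficients(Distribution& d, int max_level)
{
    std::vector<double> coeffs;
    for (int j = 0; j <= max_level; ++j)
        for (int k = 0; k < (1 << j); ++k)
            coeffs.push_back(d.integrate(
                [j, k](double x) { return haar_jk(j, k, x); }));
    return coeffs;
}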

Then, to deserialize, you produce a Distribution object whose integrate method performs the integration against the wavelets. Integration can be performed with your favorite quadrature method. I stay deliberately vague here because the actual implementation depends on the wavelet family you choose (smooth, compactly supported, orthogonal or not, etc). You will need to dive into the literature for details anyway.
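
A matching sketch of the deserialized side, reusing the haar_jk helper above: the stored coefficients define a reconstructed density, and integrate becomes quadrature against it. I use a plain midpoint rule purely as a stand-in for whichever quadrature you end up preferring; WaveletDistribution and its members are illustrative names.

#include <cstddef>
#include <vector>

struct WaveletDistribution : Distribution
{
    std::vector<double> coeffs;   // deserialized c_{j,k}, level-major order
    int max_level = 0;

    // Reconstructed density on [0, 1]: the Haar scaling term of a
    // normalized density is 1, plus the wavelet corrections.
    double density(double x) const
    {
        double p = 1.0;
        std::size_t i = 0;
        for (int j = 0; j <= max_level; ++j)
            for (int k = 0; k < (1 << j); ++k)
                p += coeffs[i++] * haar_jk(j, k, x);
        return p;
    }

    // integrate(f) approximates the integral of f(x) * density(x)
    // over [0, 1], here with a midpoint rule.
    double integrate(std::function<double(double)> f) override
    {
        const int n = 1000;
        double acc = 0;
        for (int i = 0; i < n; ++i) {
            double x = (i + 0.5) / n;
            acc += f(x) * density(x);
        }
        return acc / n;
    }
};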

The point here is that you usually need only very few wavelets to represent a smooth function with few features (say a few peaks, and regularly shaped otherwise), unlike more "regular" finite elements (histograms are a particular kind of finite element representation). The wavelet representation adapts itself to the features of the function being transformed, whatever their location or size. You can moreover decide how many coefficients to keep, and hence control the compression ratio.

Also, note that a plain grid at the 0.001 resolution would need a large number of bins, whereas I suspect you'll need only a handful of wavelet coefficients.

The tradeoff lies in which wavelet class you use: very smooth distributions will likely be well represented with smooth wavelets, but compactly supported wavelets may be easier to integrate against, etc. Experiment. Note that you don't need "wavelet transform" packages here: only the explicit representation of wavelet functions, and quadrature routines (try Gauss-XXX procedures for reconstruction, or something high order).

I'd favor wavelets which are defined in the Fourier domain (like Lemarie wavelets), since their values and derivatives at zero in Fourier space are known. This allows you to enforce the constraints on the distributions: probability measures must integrate to one, and you may happen to know the expected value or variance beforehand.

Also, you may want to change variables to only deal with functions on, e.g., [0, 1]. There is a copious literature on wavelets on the interval.
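
A one-line sketch of such a change of variables, where lo and hi stand for the (at most 40-wide) window that actually contains one cell's samples; keep lo and hi next to the serialized coefficients so the map can be inverted after reconstruction.

// Affine maps between the sample window [lo, hi] and the unit interval [0, 1].
double to_unit(double x, double lo, double hi)   { return (x - lo) / (hi - lo); }
double from_unit(double u, double lo, double hi) { return lo + u * (hi - lo); }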

Answered by Alexandre C.


How about this:

I assume we start with data equivalent to a histogram, i.e. a list of intervals, each with the number of samples associated with it.

Now let's build an approximation of it, starting with a trivial histogram with one bucket containing all the samples.

For each bucket in the approximation, consider splitting it in the middle into two buckets, but only actually split the bucket which will yield the best improvement in the approximation.

Repeat until the desired approximation quality is reached or the maximum acceptable number of buckets is obtained.

So what you need is a way to identify the best bucket to split. I'd say the deviation of the new buckets' counts from half the original bucket's count might work as that measure.

You also need some criterion for when to stop.

I think this is actually quite similar to an Octree in 1D. You might look there for efficient implementations.

For actually storing the approximation, you could just store one array with the bucket boundaries and another with the content of each bucket, or a single array of twice the size with alternating bucket boundary and content values.
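
A hedged sketch of the whole greedy procedure (names are illustrative): the "improvement" of a split is measured, as suggested above, by how far the left half deviates from an even share of the parent bucket's count, and the result is stored as the two arrays just described.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct AdaptiveHistogram
{
    std::vector<double> bounds;   // n + 1 boundaries for n buckets
    std::vector<double> counts;   // number of samples in each bucket
};

// Number of (sorted) samples falling in the half-open interval [lo, hi).
static double count_in(const std::vector<double>& sorted, double lo, double hi)
{
    return std::lower_bound(sorted.begin(), sorted.end(), hi)
         - std::lower_bound(sorted.begin(), sorted.end(), lo);
}

AdaptiveHistogram build(std::vector<double> samples,
                        std::size_t max_buckets, double min_gain)
{
    std::sort(samples.begin(), samples.end());

    AdaptiveHistogram h;
    h.bounds = { samples.front(), samples.back() + 1e-9 };   // one bucket holds everything
    h.counts = { double(samples.size()) };

    while (h.counts.size() < max_buckets) {
        // Pick the bucket whose midpoint split deviates most from a 50/50 share.
        std::size_t best = 0;
        double best_gain = 0;
        for (std::size_t i = 0; i < h.counts.size(); ++i) {
            double mid  = 0.5 * (h.bounds[i] + h.bounds[i + 1]);
            double left = count_in(samples, h.bounds[i], mid);
            double gain = std::abs(left - 0.5 * h.counts[i]);
            if (gain > best_gain) { best_gain = gain; best = i; }
        }
        if (best_gain < min_gain) break;   // stopping criterion

        // Split the chosen bucket at its midpoint.
        double mid   = 0.5 * (h.bounds[best] + h.bounds[best + 1]);
        double left  = count_in(samples, h.bounds[best], mid);
        double right = h.counts[best] - left;
        h.bounds.insert(h.bounds.begin() + best + 1, mid);
        h.counts[best] = left;
        h.counts.insert(h.counts.begin() + best + 1, right);
    }
    return h;
}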

Answered by Jens Schauder