 

How approximation search works

[Prologue]

This Q&A is meant to explain more clearly the inner workings of my approximation search class, which I first published here:

  • Increasing accuracy of solution of transcendental equation

I have been asked for more detailed info about this a few times already (for various reasons), so I decided to write a Q&A-style topic about it which I can easily reference in the future instead of explaining it over and over again.

[Question]

How can values/parameters be approximated in the real domain (double) to fit polynomials or parametric functions, or to solve (difficult) equations (like transcendental ones)?

Restrictions

  • real domain (double precision)
  • C++ language
  • configurable precision of approximation
  • known interval for search
  • fitted value/parameter need not be strictly monotonic, and may not be a function at all
asked Mar 22 '16 by Spektre



2 Answers

Approximation search

This is an analogue of binary search, but without its restriction that the searched function/value/parameter must be a strictly monotonic function, while still sharing the O(log(n)) complexity.

For example, let us assume the following problem:

We have a known function y=f(x) and want to find x0 such that y0=f(x0). This could in principle be done with the inverse function of f, but there are many functions for which we do not know how to compute the inverse. So how can this be computed in such a case?

Knowns

  • y=f(x) - input function
  • y0 - wanted point y value
  • a0,a1 - solution x interval range

Unknowns

  • x0 - wanted point x value, which must be in the range x0=<a0,a1>

Algorithm

  1. probe some points x(i)=<a0,a1> evenly dispersed along the range with some step da

    So for example x(i)=a0+i*da where i={ 0,1,2,3... }

  2. for each x(i) compute the distance/error ee of y=f(x(i))

    This can be computed for example like this: ee=fabs(f(x(i))-y0), but any other metric can be used too.

  3. remember point aa=x(i) with minimal distance/error ee

  4. stop when x(i)>a1

  5. recursively increase accuracy

    So first restrict the range to search only around the found solution, for example:

    a0'=aa-da;
    a1'=aa+da;
    

    then increase the precision of the search by lowering the search step:

    da'=0.1*da;
    

    if da' is not too small and the max recursion count is not reached, go to #1

  6. found solution is in aa

This is what I have in mind:

[image: initial search pass (left) and next recursive, refined pass (right)]

On the left side the initial search is illustrated (bullets #1,#2,#3,#4). On the right side is the next recursive search (bullet #5). This loops recursively until the desired accuracy is reached (number of recursions). Each recursion increases the accuracy 10 times (0.1*da). The gray vertical lines represent the probed x(i) points.

Here is the C++ source code for this:

//---------------------------------------------------------------------------
//--- approx ver: 1.01 ------------------------------------------------------
//---------------------------------------------------------------------------
#ifndef _approx_h
#define _approx_h
#include <math.h>
//---------------------------------------------------------------------------
class approx
    {
public:
    double a;        // current probed x value x(i)
    double aa;       // best solution found so far
    double a0,a1;    // current search interval <a0,a1>
    double da;       // current search step
    double *e;       // pointer to user-computed error for current a
    double e0;       // smallest error found so far (negative = none yet)
    int i,n;         // current recursion, max number of recursions
    bool done,stop;  // done = search finished, stop = end of interval pass

    approx()            { a=0.0; aa=0.0; a0=0.0; a1=1.0; da=0.1; e=NULL; e0=-1.0; i=0; n=5; done=true; }
    approx(approx& a)   { *this=a; }
    ~approx()           {}
    approx* operator = (const approx *a) { *this=*a; return this; }
    //approx* operator = (const approx &a) { ...copy... return this; }

    void init(double _a0,double _a1,double _da,int _n,double *_e)
        {
        if (_a0<=_a1) { a0=_a0; a1=_a1; }
        else          { a0=_a1; a1=_a0; }
        da=fabs(_da);
        n =_n ;
        e =_e ;
        e0=-1.0;
        i=0; a=a0; aa=a0;
        done=false; stop=false;
        }
    void step()
        {
        if ((e0<0.0)||(e0>*e)) { e0=*e; aa=a; }         // better solution
        if (stop)                                       // increase accuracy
            {
            i++; if (i>=n) { done=true; a=aa; return; } // final solution
            a0=aa-fabs(da);
            a1=aa+fabs(da);
            a=a0; da*=0.1;
            a0+=da; a1-=da;
            stop=false;
            }
        else{
            a+=da; if (a>a1) { a=a1; stop=true; }       // next point
            }
        }
    };
//---------------------------------------------------------------------------
#endif
//---------------------------------------------------------------------------

This is how to use it:

approx aa;
double ee,x,y,x0,y0=here_your_known_value;
//            a0,  a1, da,n, ee  
for (aa.init(0.0,10.0,0.1,6,&ee); !aa.done; aa.step())
    {
    x = aa.a;        // this is x(i)
    y = f(x);        // here compute the y value for whatever you want to fit
    ee = fabs(y-y0); // compute error of solution for the approximation search
    }

In the comment above the for (aa.init(... line the operands are named. a0,a1 is the interval on which the x(i) are probed, da is the initial step between the x(i), and n is the number of recursions. So if n=6 and da=0.1, the final max error of the x fit will be ~0.1/10^6=0.0000001. The &ee is a pointer to the variable where the actual error is computed. I chose a pointer so there are no collisions when nesting this, and also for speed, as passing a parameter to a heavily used function causes heap thrashing.
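
As a concrete example, here is a minimal self-contained sketch of my own (the interval, step, and the assumption that the class above is saved as approx.h are illustrative, not part of the original) that finds sqrt(2) by searching for x with x*x = y0 on <0,2>:

#include <math.h>
#include <stdio.h>
#include "approx.h"  // assuming the class above is saved under this name

int main()
    {
    approx aa;
    double ee,x,y0=2.0;                 // we want x such that x*x = 2
    //            a0, a1, da,n, ee
    for (aa.init(0.0,2.0,0.1,6,&ee); !aa.done; aa.step())
        {
        x  = aa.a;                      // probed x(i)
        ee = fabs((x*x)-y0);            // error metric |f(x)-y0|
        }
    printf("x = %.7f (expected %.7f)\n",aa.aa,sqrt(2.0));
    return 0;
    }

With n=6 and da=0.1 the printed x should match sqrt(2) to roughly the ~0.0000001 error estimated above.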

[notes]

This approximation search can be nested to any dimensionality (but of course you need to be careful about the speed); see some examples, and a nested sketch after this list:

  • Approximation of n points to the curve with the best fit
  • Curve fitting with y points on repeated x positions (Galaxy Spiral arms)
  • Increasing accuracy of solution of transcendental equation
  • Find Minimum area ellipse enclosing a set of points in c++
  • 2D TDoA Time Difference of Arrival
  • 3D TDoA Time Difference of Arrival
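
As an illustration of such nesting, here is a sketch of my own (the intervals, steps, and the simple sum-of-absolute-residuals metric are assumptions, not code from the linked answers) that nests two approx instances to fit the parameters a,b of a line y = a*x + b to sampled points:

#include <math.h>
#include "approx.h"

// sum of absolute residuals of the model y = a*x + b over n samples
double fit_error(double a,double b,const double *px,const double *py,int n)
    {
    double e=0.0;
    for (int i=0;i<n;i++) e+=fabs((a*px[i])+b-py[i]);
    return e;
    }

void fit_line(double &a,double &b,const double *px,const double *py,int n)
    {
    approx sa,sb; double ea,eb;
    for (sa.init(-10.0,10.0,1.0,5,&ea); !sa.done; sa.step())        // outer: a
        {
        for (sb.init(-10.0,10.0,1.0,5,&eb); !sb.done; sb.step())    // inner: b
            eb=fit_error(sa.a,sb.a,px,py,n);
        ea=sb.e0;                       // best error over all b for this a
        }
    a=sa.aa;                            // fitted a
    for (sb.init(-10.0,10.0,1.0,5,&eb); !sb.done; sb.step())        // recover b
        eb=fit_error(a,sb.a,px,py,n);
    b=sb.aa;                            // fitted b
    }

The inner search's best error (sb.e0) serves as the outer search's error, and after the outer loop converges the inner search is rerun once at the best a to recover the matching b.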

In case of a non-function fit where you need to get "all" the solutions, you can use recursive subdivision of the search interval after a solution is found to check for further solutions; a sketch of this follows the linked example below:

  • Given an X co-ordinate, how do I calculate the Y co-ordinate for a point so that it rests on a Bezier Curve
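
One possible shape of that subdivision (my own sketch; the tolerance tol deciding whether an interval still holds an acceptable solution is an assumed parameter):

#include <math.h>
#include <vector>
#include "approx.h"

// collect all x in <a0,a1> with |f(x)-y0| <= tol by searching, then
// recursively subdividing the interval around each found solution
void find_all(double (*f)(double),double y0,double a0,double a1,
              double da,int n,double tol,std::vector<double> &out)
    {
    if (a1-a0<da) return;                       // interval too small to probe
    approx aa; double ee;
    for (aa.init(a0,a1,da,n,&ee); !aa.done; aa.step())
        ee=fabs(f(aa.a)-y0);
    if (aa.e0>tol) return;                      // no acceptable solution here
    out.push_back(aa.aa);                       // record the found solution
    find_all(f,y0,a0,aa.aa-da,da,n,tol,out);    // search left of it
    find_all(f,y0,aa.aa+da,a1,da,n,tol,out);    // search right of it
    }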

What should you be aware of?

You have to choose the search interval <a0,a1> carefully so that it contains the solution but is not too wide (or the search will be slow). The initial step da is also very important: if it is too big you can miss local min/max solutions, and if it is too small the search becomes too slow (especially for nested multidimensional fits).

answered Sep 19 '22 by Spektre


A combination of the secant method (with bracketing, but see the correction at the bottom) and the bisection method is much better:


We find root approximations by secants, and keep the root bracketed as in bisection.


Always keep the two edges of the interval such that the function value at one edge is negative and at the other it is positive, so the root is guaranteed to be inside; and instead of halving, use the secant method.

Pseudocode:

given a function f
given two points a, b, such that a < b and sign(f(a)) /= sign(f(b))
given tolerance tol
find root z of f such that abs(f(z)) < tol     -- stop_condition

DO:
    x = root of f by linear interpolation of f between a and b
    m = midpoint between a and b

    if stop_condition holds at x or m, set z and STOP

    [a,b] := [a,x,m,b].sort.choose_shortest_interval_with_opposite_signs_at_its_ends

This obviously halves the interval [a,b], or does even better, at each iteration; so unless the function is extremely badly behaved (like, say, sin(1/x) near x=0), this will converge very quickly, taking at most two evaluations of f per iteration step.

And we can detect the badly behaved cases by checking that b-a does not become too small (especially if we're working with finite precision, as with doubles).

Update: apparently this is actually the double false position method, which is the secant method with bracketing, as described by the pseudocode above. Augmenting it with the midpoint as in bisection ensures convergence even in the most pathological cases.
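
For reference, here is a C++ sketch of the pseudocode above (the function name and the tiny-interval guard are my own choices): both the secant/false-position point and the midpoint are evaluated, and the bracket shrinks to the shortest adjacent sign-changing interval among the sorted points:

#include <math.h>

// root of f in [a,b], assuming sign(f(a)) != sign(f(b)); stops when |f| < tol
double secant_bisect(double (*f)(double),double a,double b,double tol)
    {
    double fa=f(a),fb=f(b);
    for (;;)
        {
        double x=a-fa*(b-a)/(fb-fa);    // secant (linear interpolation) point
        double m=0.5*(a+b);             // bisection midpoint
        double fx=f(x),fm=f(m);
        if (fabs(fx)<tol) return x;     // stop_condition holds at x
        if (fabs(fm)<tol) return m;     // stop_condition holds at m
        double p[4]={a,x,m,b},v[4]={fa,fx,fm,fb},t;
        for (int i=1;i<4;i++)           // sort the 4 points (insertion sort)
            for (int j=i;(j>0)&&(p[j-1]>p[j]);j--)
                { t=p[j-1];p[j-1]=p[j];p[j]=t; t=v[j-1];v[j-1]=v[j];v[j]=t; }
        int k=-1;                       // shortest adjacent interval whose
        for (int i=0;i<3;i++)           // ends have opposite signs
            if ((v[i]<0.0)!=(v[i+1]<0.0))
                if ((k<0)||(p[i+1]-p[i]<p[k+1]-p[k])) k=i;
        a=p[k]; b=p[k+1]; fa=v[k]; fb=v[k+1];
        if (b-a<1e-12*(fabs(a)+fabs(b)+1.0)) return m;  // finite-precision guard
        }
    }

For example, calling secant_bisect with f(x)=cos(x)-x on [0,1] and tol=1e-12 converges to the fixed point near 0.7390851 in a handful of iterations.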

answered Sep 18 '22 by Will Ness