
Algorithm to optimize multiple variables more efficiently than trial-and-error

Google's results on this seem to require more advanced maths than I'm familiar with (and I might not be smarter than a fifth grader, but I don't intend to find out).

I'm looking for a general way to solve multivariate optimization problems, preferably in c#, without having to dig into matrices and eigenvectors and normal distributions.

Say I have numeric variables x, y, z, and w, and function f such that w = f(x, y, z). I want to maximize w, and...

  • f is unknown
  • Codependency between x, y and/or z, if any, is unknown
  • In some cases I only have post-hoc data sets
  • In other cases I can vary x, y and z and resample w on-demand
  • In the a-priori cases, the ideal algorithm maximizes w with the fewest trial permutations of x, y, and z, and chooses the next value for each after every round of sampling

I have rough minimum and maximum bounds for the independent variables. I of course don't want to sample any more of the permutation space than necessary. I'd like the algorithm to have at least a crude ability to detect the most glaring co-dependencies, e.g., diminishing returns when x > 2y, or actual deterioration in w when the sum of x, y, and z exceeds some ceiling, etc.
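
To make the setup concrete, the situation in code looks something like this (the body of f below is a made-up stand-in, since the real f is unknown):

import random

def f(x, y, z):
    # Hypothetical stand-in for the unknown system: in reality this
    # would run an experiment or query collected data and return w.
    return -(x - 3)**2 - (y - 1)**2 - (z - 2)**2 + random.gauss(0, 0.1)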

Most of the math libraries I've looked at assume I know how to perform quantum nergenflip projections over the Boigenfoodle Continuum, and I'm just not there. How would a non-mathematician coder accomplish this?

asked May 16 '12 by Paul Smith



1 Answer

You can try simulated annealing if you don't want to get stuck in a local optimum.

Basically, start with some x, y, z. Generate dx, dy, and dz randomly from a zero-mean distribution (e.g., a Gaussian). If w(x+dx, y+dy, z+dz) > w(x, y, z), accept the new solution. Otherwise, accept it with probability w(x+dx, y+dy, z+dz) / w(x, y, z) (which assumes w is positive).

Python code

import random

def simAnneal(w, seed_x, numSteps=100000, sigma=0.01):
    # Track the best solution seen so far.
    optimal_x = list(seed_x)
    optimal_w = w(optimal_x)

    # The current random-walk solution starts at the seed.
    cur_x = list(seed_x)
    cur_w = optimal_w

    for _ in range(numSteps):
        # Perturb the current solution, not the original seed.
        new_x = [xi + random.gauss(0, sigma) for xi in cur_x]
        new_w = w(new_x)

        # Always accept an improvement; otherwise accept a worse
        # solution with probability new_w / cur_w (assumes w > 0).
        if new_w > cur_w or random.random() < new_w / cur_w:
            cur_x = new_x
            cur_w = new_w
            if cur_w > optimal_w:
                optimal_w = cur_w
                optimal_x = cur_x

    return optimal_x
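
For example, with a made-up test objective (toy_w below is purely illustrative, and is kept strictly positive so the acceptance ratio is well defined):

def toy_w(x):
    # Hypothetical objective with a single peak at (1, 2, 3).
    return 1.0 / (1.0 + (x[0] - 1)**2 + (x[1] - 2)**2 + (x[2] - 3)**2)

best_x = simAnneal(toy_w, [0.0, 0.0, 0.0], numSteps=50000, sigma=0.05)
print(best_x)  # should end up near [1, 2, 3]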
answered Sep 25 '22 by ElKamina