
Can TensorFlow be used for global minimization of multivariate functions?

I've been curious if TF can be used for global optimization of a function. For example, could it be used to efficiently find the ground state of a Lennard-Jones potential? Would it be any better or worse than existing optimization methods, like Basin-Hopping?

Part of my research involves searching for the ground state of large, multi-component molecules. Traditional methods (BH, etc.) are good for this, but also quite slow. I've looked into TF and there are parts that seem robust enough to apply to this problem, although my limited web search doesn't turn up any application of TF to it.

Christopher Mauney asked Nov 06 '17

People also ask

What is a TensorFlow optimizer?

Optimizers are extended classes that carry the extra information needed to train a specific model. An optimizer is initialized with its hyperparameters; importantly, no Tensor is required at construction time. Optimizers are used to improve the speed and quality of training a model.
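For instance, a minimal sketch (assuming the TF 2.x Keras API; the hyperparameter values here are arbitrary):

    import tensorflow as tf

    # The optimizer is configured with hyperparameters only;
    # no Tensor is involved at construction time.
    opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)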

What does the minimize function of Optimizer do?

Calling minimize() takes care of both computing the gradients and applying them to the variables. If you want to process the gradients before applying them, you can instead use the optimizer in three steps: compute the gradients with tf.GradientTape, process the gradients as you wish, and apply them with apply_gradients().
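A minimal sketch contrasting the two approaches (assuming the TF 2.x Keras optimizer API; the quadratic loss and the clipping step are arbitrary illustrations):

    import tensorflow as tf

    x = tf.Variable(3.0)
    opt = tf.keras.optimizers.SGD(learning_rate=0.1)

    # One-shot: minimize() both computes and applies the gradients.
    opt.minimize(lambda: (x - 2.0) ** 2, var_list=[x])

    # Equivalent step-by-step form, for processing gradients first:
    with tf.GradientTape() as tape:                     # 1. compute
        loss = (x - 2.0) ** 2
    grads = tape.gradient(loss, [x])
    grads = [tf.clip_by_norm(g, 1.0) for g in grads]    # 2. process (e.g. clip)
    opt.apply_gradients(zip(grads, [x]))                # 3. apply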


1 Answer

The gradient descent performed to train neural networks considers only a local region of the function. There is thus no guarantee that it will converge to a global minimum (which is actually fine for most machine-learning algorithms; given the very high dimensionality of the spaces involved, one is usually happy to find a good local minimum without having to explore around too much).

That being said, one could certainly use TensorFlow (or any such framework) to implement the local optimizer inside a global basin-hopping scheme, e.g. as follows (simplified algorithm; a sketch in code follows the list):

  1. Choose a starting point;
  2. Use your local optimizer to get the local minimum;
  3. Apply some perturbation to the coordinates of this minimum;
  4. From this new position, re-use your local optimizer to get the next local minimum;
  5. Keep the best minimum, repeat from 3.
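A minimal sketch of that loop, using TensorFlow's automatic differentiation with Adam standing in as "your local optimizer" on a Lennard-Jones cluster (the function names, the reduced units epsilon = sigma = 1, and all hyperparameters are illustrative assumptions, not a tuned implementation):

    import numpy as np
    import tensorflow as tf

    def lennard_jones_energy(coords):
        # coords: (n_atoms, 3) tensor; reduced units (epsilon = sigma = 1).
        diff = coords[:, None, :] - coords[None, :, :]        # pairwise displacements
        r2 = tf.reduce_sum(diff * diff, axis=-1)              # squared distances
        mask = ~tf.eye(tf.shape(coords)[0], dtype=tf.bool)    # drop self-pairs
        inv_r6 = 1.0 / tf.boolean_mask(r2, mask) ** 3
        # factor 0.5 because each pair appears twice in the full matrix
        return 0.5 * tf.reduce_sum(4.0 * (inv_r6 ** 2 - inv_r6))

    def local_minimize(x0, steps=500, lr=0.01):
        # Steps 2/4: follow the gradient to the nearest local minimum.
        x = tf.Variable(x0, dtype=tf.float64)
        opt = tf.keras.optimizers.Adam(learning_rate=lr)
        for _ in range(steps):
            with tf.GradientTape() as tape:
                energy = lennard_jones_energy(x)
            opt.apply_gradients(zip(tape.gradient(energy, [x]), [x]))
        return x.numpy(), float(lennard_jones_energy(x).numpy())

    def basin_hopping(n_atoms=4, n_hops=20, step_size=0.5, seed=0):
        rng = np.random.default_rng(seed)
        x0 = rng.uniform(-1.0, 1.0, size=(n_atoms, 3))        # 1. starting point
        best_x, best_e = local_minimize(x0)                   # 2. first local minimum
        for _ in range(n_hops):
            trial = best_x + rng.normal(scale=step_size, size=best_x.shape)  # 3. perturb
            x, e = local_minimize(trial)                      # 4. next local minimum
            if e < best_e:                                    # 5. keep the best
                best_x, best_e = x, e
        return best_x, best_e

For four atoms, this greedy loop should approach the known global minimum of the LJ4 cluster (a regular tetrahedron at E = -6 in reduced units), though with so few hops it is not guaranteed to find it on every run.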

Actually, some people are currently trying to implement this exact scheme, interfacing TF with scipy.optimize.basinhopping(). Current development and discussion can be found in this GitHub issue.
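To give an idea of what such an interface could look like (a sketch only, reusing the lennard_jones_energy() helper assumed above; jac=True tells scipy that the callable returns the TF-computed gradient alongside the energy):

    import numpy as np
    import tensorflow as tf
    from scipy.optimize import basinhopping

    def value_and_grad(flat_coords):
        # scipy passes flat float64 arrays; TF provides the gradient.
        x = tf.Variable(flat_coords.reshape(-1, 3), dtype=tf.float64)
        with tf.GradientTape() as tape:
            energy = lennard_jones_energy(x)
        return float(energy.numpy()), tape.gradient(energy, x).numpy().ravel()

    x0 = np.random.default_rng(0).uniform(-1.0, 1.0, size=12)  # 4 atoms, flattened
    result = basinhopping(value_and_grad, x0, niter=50,
                          minimizer_kwargs={"method": "L-BFGS-B", "jac": True})
    print(result.fun)  # lowest energy found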

benjaminplanche answered Sep 22 '22