I had been interested in neural networks for a while and thought about using one in Python for a light project: comparing various minimization techniques in the time domain (i.e. which is fastest).
Then I realized I didn't even know whether a NN is good for minimization. What do you think?
The Universal Approximation Theorem states that a neural network with a single hidden layer can approximate any continuous function to arbitrary accuracy, for inputs within a bounded range. If the function is discontinuous, i.e. it jumps around or has large gaps, the network won't be able to approximate it well.
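As a minimal sketch of this, you can fit a one-hidden-layer network to a smooth function on a bounded range with scikit-learn's MLPRegressor (the target function, layer width, and solver here are illustrative choices, not anything prescribed by the theorem):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Approximate a continuous function (sin) on a bounded range [-pi, pi]
# with a single hidden layer, as the theorem says is possible.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(500, 1))
y = np.sin(X).ravel()

# One hidden layer of 50 tanh units; hyperparameters are illustrative.
net = MLPRegressor(hidden_layer_sizes=(50,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)

X_test = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1)
err = np.max(np.abs(net.predict(X_test) - np.sin(X_test).ravel()))
print(err)  # small on the training range; no guarantee outside it
```

Note the "within a bounded range" caveat: the fit says nothing about inputs outside [-pi, pi].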
Given the above description of a neural network: when does a neural network model become a deep learning model? Depth is simply the number of hidden layers. There is no strict rule for how many layers are necessary to make a model deep, but as a common rule of thumb, a model with more than 2 hidden layers is said to be deep.
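In scikit-learn terms, depth is just the length of the hidden_layer_sizes tuple (the layer widths below are arbitrary placeholders):

```python
from sklearn.neural_network import MLPRegressor

# Each entry in hidden_layer_sizes is one hidden layer.
shallow = MLPRegressor(hidden_layer_sizes=(64,))       # 1 hidden layer: a plain NN
deep = MLPRegressor(hidden_layer_sizes=(64, 64, 64))   # 3 hidden layers: "deep" by the rule of thumb
print(len(shallow.hidden_layer_sizes), len(deep.hidden_layer_sizes))
```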
Neural networks cannot learn "basically any" mathematical function.
Although this comes a bit too late for the author of the question, maybe somebody who wants to test some optimization algorithms will find it useful...
If you are working with regressions in machine learning (NN, SVM, multiple linear regression, k-nearest neighbors) and you want to minimize (or maximize) your regression function, this is indeed possible, but the efficiency of such algorithms depends on the smoothness of the region you are searching in (and on the step size, etc.).
In order to construct such "machine learning regressions" you could use scikit-learn. You have to train and validate your model, e.g. a Support Vector Regression, with the fit method:

svr = SVR()
svr.fit(Sm_Data_X, Sm_Data_y)
Then you have to define a function which returns the prediction of your regression for an input array x. Note that predict expects a 2-D (n_samples, n_features) array, while scipy's optimizers pass a 1-D array, so you need to reshape:

def fun(x):
    return svr.predict(np.atleast_2d(x))[0]
You can use scipy.optimize.minimize for the optimization. See the examples following the doc links.
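Putting the steps above together, here is a minimal end-to-end sketch. The data are synthetic (a noisy parabola with its minimum near x = 2), and the kernel, C, starting point, and optimizer method are illustrative choices, not requirements:

```python
import numpy as np
from sklearn.svm import SVR
from scipy.optimize import minimize

# Step 1: sample data from a noisy parabola, minimum near x = 2.
rng = np.random.default_rng(0)
Sm_Data_X = rng.uniform(-5, 5, size=(200, 1))
Sm_Data_y = (Sm_Data_X.ravel() - 2.0) ** 2 + rng.normal(0, 0.1, 200)

# Step 2: fit the Support Vector Regression to the data.
svr = SVR(kernel="rbf", C=100.0)
svr.fit(Sm_Data_X, Sm_Data_y)

# Step 3: wrap the fitted model as a scalar objective function.
def fun(x):
    # minimize passes a 1-D array; predict expects (n_samples, n_features)
    return svr.predict(np.atleast_2d(x))[0]

# Step 4: minimize the fitted surrogate, not the raw data.
res = minimize(fun, x0=np.array([0.0]), method="Nelder-Mead")
print(res.x)  # should land near the true minimizer x = 2 if the fit is good
```

Keep in mind you are minimizing the fitted surrogate, so the answer is only as good as the regression: a smooth kernel (like RBF) keeps the search region smooth, which is exactly the efficiency point made above.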