I am trying to optimize a device design using the Matlab Optimization Toolbox (the fmincon
function, to be precise). To get my point across quickly, I am providing a small variable set {l_m, r_m, l_c, r_c} whose starting value is {4mm, 2mm, 1mm, 0.5mm}.
Though Matlab doesn't specifically recommend normalizing the input variables, my professor advised me to normalize the variables to the maximum value of {l_m, r_m, l_c, r_c}. Thus the variables will now take values from 0 to 1 (instead of, say, 3mm to 4.5mm in the case of l_m). Of course, I have to modify my objective function to convert the variables back to their proper values before doing the calculations.
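Here is a minimal sketch of that setup (deviceObjective stands in for my actual model, and every bound except l_m's 3mm-to-4.5mm range is a made-up placeholder):

    % Assumed physical bounds for [l_m, r_m, l_c, r_c] in meters; only
    % l_m's range (3 mm to 4.5 mm) is real, the rest are placeholders.
    lbPhys = [3.0e-3, 1.5e-3, 0.5e-3, 0.25e-3];
    ubPhys = [4.5e-3, 2.5e-3, 1.5e-3, 0.75e-3];
    scale  = ubPhys;                % normalize each variable by its maximum

    x0 = [4e-3, 2e-3, 1e-3, 0.5e-3] ./ scale;   % starting point, normalized

    % fmincon works on the normalized variables; the wrapper converts
    % back to physical units before evaluating the model.
    obj = @(x) deviceObjective(x .* scale);

    [xOpt, fval] = fmincon(obj, x0, [], [], [], [], ...
                           lbPhys ./ scale, ubPhys ./ scale);
    xPhys = xOpt .* scale;          % optimum back in physical units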
My question is: do optimization functions like fmincon
care if the input variables are normalized? Is it reasonable to expect a change in performance on account of normalization? The point to consider is how the optimizer varies a variable like l_m: in one case it changes it from 4mm to 4.1mm, and in the other from 0.75 to 0.76.
The goal of normalization is to bring the values of numeric columns in a dataset onto a common scale without distorting differences in the ranges of values. In machine learning, not every dataset requires normalization; it is needed only when features have different ranges.
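For illustration, the usual min-max form of that rescaling looks like this in Matlab (X is a hypothetical data matrix with one feature per column):

    % Rescale each column of X to the common range [0, 1].
    Xmin  = min(X, [], 1);
    Xmax  = max(X, [], 1);
    Xnorm = (X - Xmin) ./ (Xmax - Xmin);   % implicit expansion, R2016b+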
Normalization plays an important role in ensuring the consistency of optimal solutions with the preferences expressed by the decision maker. We also compare several approaches to solving the problem, assuming that a linear or mixed-integer programming solver, such as CPLEX, is available.
Normalization is a technique for organizing data in a database. It is important that a database be normalized to minimize redundancy (duplicate data) and to ensure that only related data is stored in each table. It also prevents issues stemming from database modifications such as insertions, deletions, and updates.
Full normalisation will generally not improve performance; in fact, it can often make it worse, but it will keep your data duplicate-free.
It is usually much easier to optimize when the input is normalized. You can expect an improvement in both the speed of convergence and the accuracy of the output.
For instance, as the notes at http://www-personal.umich.edu/~mepelman/teaching/IOE511/Handouts/511notes07-7.pdf show, the convergence rate of gradient descent is better bounded when the ratio of the largest to the smallest eigenvalue of the Hessian (its condition number) is small. When your data is normalized, this ratio is typically much closer to 1 (the ideal case).
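A two-variable toy case (my own illustration, not taken from the linked notes) shows the effect of rescaling on that eigenvalue ratio:

    % A quadratic objective whose variables live on scales that differ by
    % three orders of magnitude has a badly conditioned Hessian.
    H = diag([1, 1e6]);
    cond(H)                         % 1e6: gradient descent zig-zags

    % Rescaling each variable by 1/sqrt of its Hessian entry fixes it.
    S = diag(1 ./ sqrt(diag(H)));
    cond(S' * H * S)                % 1: the ideal ratio mentioned above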