Is regularisation a subset of normalisation? I know normalisation is used when the values are not all on the same scale, but normalisation is also used to tone down the values, and so is regularisation. So what's the difference between the two?
Normalisation adjusts the data; regularisation adjusts the prediction function.
As you noted, if your features are on very different scales (especially wide low-to-high ranges), you likely want to normalise the data: transform each column so that all columns share the same (or compatible) basic statistics, such as mean and standard deviation. This helps keep your fitting parameters on a scale the computer can handle without a damaging loss of accuracy.
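A minimal sketch of that column-wise standardisation, using a made-up feature matrix (the values and column meanings are invented for illustration):

```python
import numpy as np

# Hypothetical data: two features on very different scales,
# e.g. income in dollars vs. age in years.
X = np.array([
    [50_000.0, 25.0],
    [82_000.0, 40.0],
    [61_000.0, 33.0],
    [95_000.0, 52.0],
])

# Standardise each column: subtract its mean and divide by its
# standard deviation, so every feature has mean 0 and std 1.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_norm.mean(axis=0))  # approximately [0, 0]
print(X_norm.std(axis=0))   # approximately [1, 1]
```

After this step both columns contribute on comparable scales, regardless of their original units.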
One goal of model training is to identify the signal (important features) and ignore the noise (random variation not really related to classification). If you give your model free rein to minimize the error on the given data, you can suffer from overfitting: the model insists on predicting the data set exactly, including those random variations.
Regularisation imposes some control on this by rewarding simpler fitting functions over more complex ones. For instance, it can encode the preference that a simple log function with an RMS error of x is preferable to a 15th-degree polynomial with an error of x/2. Tuning the trade-off is up to the model developer: if you know that your data are reasonably smooth in reality, you can examine the output functions and fitting errors and choose your own balance.
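One common way to implement that trade-off is ridge (L2) regularisation, which adds a penalty on the size of the coefficients to the least-squares objective. Below is a sketch on invented toy data (the degree, penalty strength, and noise level are all illustrative choices, not recommendations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth underlying trend plus noise.
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# High-degree polynomial features invite overfitting.
degree = 9
A = np.vander(x, degree + 1)

# Plain least squares: minimise ||A w - y||^2 with no penalty.
w_plain, *_ = np.linalg.lstsq(A, y, rcond=None)

# Ridge regression: minimise ||A w - y||^2 + lam * ||w||^2.
# The penalty term rewards smaller (simpler) coefficient vectors.
lam = 1e-3
n = A.shape[1]
w_ridge = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# The regularised fit has much smaller coefficients: it trades a
# slightly larger training error for a simpler, smoother function.
print(np.linalg.norm(w_plain))   # large
print(np.linalg.norm(w_ridge))   # much smaller
```

Increasing `lam` pushes the model further toward simplicity; setting it to zero recovers the unpenalised fit. Choosing `lam` is exactly the balancing act described above.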