Neural Networks normalizing output data

I have training data for a neural network (NN) along with the expected outputs. Each input is a 10-dimensional vector and has 1 expected output. I have normalised the training data using Gaussian (z-score) normalisation, but I don't know how to normalise the outputs, since each output is only a single value. Any ideas?

Example:

Raw Input Vector: -128.91, 71.076, -100.75, 4.2475, -98.811, 77.219, 4.4096, -15.382, -6.1477, -361.18

Normalised Input Vector: -0.6049, 1.0412, -0.3731, 0.4912, -0.3571, 1.0918, 0.4925, 0.3296, 0.4056, -2.5168

The raw expected output for the above input is 1183.6, but I don't know how to normalise that. Should I normalise the expected output as if it were just another dimension of the input vector?
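
For illustration, here is a minimal NumPy sketch of Gaussian (z-score) normalisation applied both to a batch of 10-dimensional inputs and, if you choose to, to the scalar targets. The arrays are placeholder data, not the actual training set, and the accepted answer below argues that normalising the targets is usually unnecessary for regression.

    import numpy as np

    # Placeholder training set: N rows of 10-dimensional inputs and N scalar targets
    # (random values, not the asker's real data).
    X = np.random.randn(100, 10) * 50
    y = np.random.randn(100) * 300 + 1000

    # Gaussian (z-score) normalisation of the inputs: per-feature mean/std over the batch.
    X_mean, X_std = X.mean(axis=0), X.std(axis=0)
    X_norm = (X - X_mean) / X_std

    # If you do decide to normalise the scalar target, it works the same way,
    # just over the single output "column":
    y_mean, y_std = y.mean(), y.std()
    y_norm = (y - y_mean) / y_std

    # A network prediction in normalised units can be mapped back with:
    # y_pred = y_pred_norm * y_std + y_mean

Either way, the target is not appended to the input vector; it is normalised (or left as-is) as a separate quantity.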

asked Mar 05 '17 by PRCube

People also ask

What is normalization in a neural network?

Normalization is done to map the data to a uniform scale. For instance, when the inputs to an ANN are on widely different scales, normalization is used to bring each input feature into the same range of values.
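
As a concrete illustration (not from the original question), min-max scaling is one simple way to bring every feature into the same [0, 1] range; the question itself uses z-score normalisation instead, but the per-feature idea is the same.

    import numpy as np

    # Three illustrative 3-dimensional samples (made-up values).
    X = np.array([[10.0, 2000.0, -0.5],
                  [12.0, 1500.0,  0.3],
                  [ 9.0, 1800.0,  0.1]])

    # Min-max scaling: each column (feature) is mapped independently to [0, 1].
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    X_scaled = (X - X_min) / (X_max - X_min)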

How do you normalize a one-dimensional output?

A one-dimensional output is not normalized on its own. The correct way is to take a complete batch of training data, say N input (and output) vectors, and normalize each dimension (variable) individually across those N samples. So even for a one-dimensional output, you still have N samples to normalize over.
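
A minimal sketch of this, assuming scikit-learn is available: the one-dimensional output is just a single column of N values, so a standard scaler fitted on that column does the job (only 1183.6 comes from the question; the other values are made up).

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    # N one-dimensional targets: a single column of N samples.
    y = np.array([1183.6, 950.2, 1410.7, 1022.3]).reshape(-1, 1)

    scaler = StandardScaler()           # fits mean and std over the N samples
    y_norm = scaler.fit_transform(y)    # (y - mean) / std, column-wise

    # scaler.inverse_transform(y_norm) recovers the original values.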

Why do we have to normalize input features before feeding them to the network?

There are two main reasons to normalize input features before feeding them to a neural network. First, if one feature in the dataset is on a much larger scale than the others, that feature dominates, and as a result the network's predictions will not be accurate. Second, features on widely different scales make gradient-based training slower to converge.

What is the advantage of normalizing the data?

Normalizing the data generally speeds up learning and leads to faster convergence. Also, the (logistic) sigmoid function is hardly ever used anymore as an activation function in hidden layers of Neural Networks, because the tanh function (among others) seems to be strictly superior.


1 Answer

From the looks of your problem, you are trying to implement some sort of regression algorithm. For regression problems you don't normally normalize the outputs. The target values in the training data can simply stay in their natural range, i.e. whatever data you have for the expected outputs.

Therefore, you can normalize the training inputs so that training goes faster, but you typically don't normalize the target outputs. When it comes to test time, or to providing new inputs, make sure you normalize that data in the same way you did during training. Specifically, use exactly the same normalization parameters that were computed on the training data for any test inputs to the network.
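
A small sketch of that idea, with hypothetical helper names (fit_normalizer / apply_normalizer): the mean and standard deviation are estimated on the training inputs once and then reused, unchanged, for any test or new inputs.

    import numpy as np

    def fit_normalizer(X_train):
        # Estimate per-feature mean and standard deviation on the training inputs only.
        return X_train.mean(axis=0), X_train.std(axis=0)

    def apply_normalizer(X, mean, std):
        # Normalize any batch (training, validation, or new test inputs)
        # with the parameters estimated on the training set.
        return (X - mean) / std

    X_train = np.random.randn(200, 10) * 40   # placeholder data, not the asker's set
    X_test = np.random.randn(20, 10) * 40

    mean, std = fit_normalizer(X_train)
    X_train_norm = apply_normalizer(X_train, mean, std)
    X_test_norm = apply_normalizer(X_test, mean, std)   # same parameters, never refit on test data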

answered Oct 17 '22 by rayryeng