For PyTorch's torch.randn() method, the documentation says:
Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).
So here is an example tensor:
x = torch.randn(4,3)
tensor([[-0.6569, -0.7337, -0.0028],
[-0.3938, 0.3223, 0.0497],
[ 0.0129, -2.7546, -2.2488],
[ 1.6754, -0.1497, 1.8202]])
When I print the mean:
x.mean()
tensor(-0.2550)
When I print the standard deviation:
x.std()
tensor(1.3225)
So why isn't the mean 0 and the standard deviation 1?
Bonus question: How do I generate a random tensor that always has a mean of 0?
It would be a big coincidence if a finite sample drawn from the distribution had exactly the same mean and exactly the same standard deviation as the distribution itself. What you can expect is that the more numbers you generate, the closer the sample's mean and standard deviation get to the "true" mean and standard deviation of the distribution.
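To see this concretely, here is a minimal sketch (assuming a standard PyTorch install; the fixed seed is only there to make the illustration reproducible) that prints the sample mean and standard deviation for increasingly large samples. With a million draws the sample mean is typically only on the order of 0.001 away from 0, since the standard error of the mean shrinks like 1/sqrt(n):

import torch

torch.manual_seed(0)  # only to make this illustration reproducible

# Sample statistics drift toward the distribution's parameters (mean 0, std 1)
# as the sample size grows, but they almost never hit them exactly.
for n in (12, 1_000, 1_000_000):
    x = torch.randn(n)
    print(f"n={n:>9,}: mean={x.mean().item():+.4f}, std={x.std().item():.4f}")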
I can only answer half of this: I think you've misunderstood the documentation. It should not be parsed as "Returns a tensor {filled with random numbers from a normal distribution} with mean 0 and variance 1" but as "Returns a tensor filled with {random numbers from a normal distribution with mean 0 and variance 1}". I.e. the returned tensor does not have mean 0 or variance 1. It's only the distribution from which the random numbers are drawn that has mean 0 and variance 1.
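As for the bonus question, which neither answer addresses directly: if you need the returned sample itself to have a mean of 0, one possible approach (a sketch, not something torch.randn provides on its own) is to subtract the tensor's own sample mean after drawing it. Note that this slightly changes the distribution of the values, and the result is only zero up to floating-point rounding:

import torch

x = torch.randn(4, 3)
x_centered = x - x.mean()  # shift the sample so its mean is (numerically) 0

print(x_centered.mean())   # ~0, up to floating-point rounding error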