I am wondering whether I am doing something wrong or whether the results really are that poor. Let's take the simplest NN example shown in the documentation:
>>> from pybrain.tools.shortcuts import buildNetwork
>>> from pybrain.datasets import SupervisedDataSet
>>> from pybrain.supervised.trainers import BackpropTrainer
>>> net = buildNetwork(2, 3, 1, bias=True)
>>> ds = SupervisedDataSet(2, 1)
>>> ds.addSample((0, 0), (0,))
>>> ds.addSample((0, 1), (1,))
>>> ds.addSample((1, 0), (1,))
>>> ds.addSample((1, 1), (0,))
>>> trainer = BackpropTrainer(net, ds)
>>> trainer.trainUntilConvergence()
>>> print net.activate((0,0))
>>> print net.activate((0, 1))
>>> print net.activate((1, 0))
>>> print net.activate((1, 1))
e.g.:
>>> print net.activate((1,0))
[ 0.37855891]
>>> print net.activate((1,1))
[ 0.6592548]
The expected output was 0. I know I could simply round, but I would still expect the network to be far more precise on such a simple example. It can be called "working" here, but I suspect I am missing something important, because as it stands this is barely usable...
The thing is, if you pass verbose=True to the trainer, you can see fairly small errors being reported (e.g. Total error: 0.0532936260399). I would read that as roughly a 5% error, so how can the output of activate be so far off afterwards?
Obviously I use pybrain for something much more complex, but I see the same problem there: roughly 50% of my test samples come out wrong even though the reported error is around 0.09.
Any help, please?
A similar question can be found here. From that discussion, it seems trainUntilConvergence is not well suited to this case, because it does not train on all of the data: part of it is held out for cross-validation, and with only four samples the held-out patterns are never learned. Try adding the data points to the training set multiple times, as sketched below.
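For example, here is a minimal sketch (reusing the setup from the question) that duplicates each XOR pattern, so the validation subset that trainUntilConvergence holds out (25% by default) still contains every case:

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

net = buildNetwork(2, 3, 1, bias=True)
ds = SupervisedDataSet(2, 1)

# Add each XOR pattern several times so that both the training
# portion and the held-out validation portion see all four cases.
for _ in range(25):
    ds.addSample((0, 0), (0,))
    ds.addSample((0, 1), (1,))
    ds.addSample((1, 0), (1,))
    ds.addSample((1, 1), (0,))

trainer = BackpropTrainer(net, ds)
trainer.trainUntilConvergence()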
Also, this example seems to need a momentum term to work. There is an example of training an XOR with pybrain using a different training method here, which worked for me when I set the number of layers to 3. It uses a momentum term of 0.99.
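As a rough sketch of the same idea (the learning rate and epoch count here are my own assumptions, not values from the linked example), BackpropTrainer accepts a momentum parameter directly:

# Sketch: train with momentum on the full dataset for a fixed number
# of epochs; learningrate=0.1 and 1000 epochs are assumptions.
trainer = BackpropTrainer(net, ds, learningrate=0.1, momentum=0.99)
for epoch in range(1000):
    trainer.train()  # one pass over the training data; returns the error

print net.activate((1, 1))  # should now be close to 0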
I would have posted this as a comment, since it doesn't fully answer the question, but I don't have enough points to comment...