I'm trying to extend the code for a single-layer neural network that takes a bitmap as input and has 26 outputs, one for the likelihood of each letter of the alphabet.
The first question I have is about the single hidden layer being added. Am I correct in thinking that the hidden layer will have its own set of output values and weights only? It doesn't need its own biases?
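To make the question concrete, here's the parameter layout I'm currently imagining, sketched in NumPy (all the names and sizes below are just my own placeholders). Part of what I'm unsure about is whether the hidden_bias vector should exist at all:

    import numpy as np

    rng = np.random.default_rng(0)

    n_inputs = 32 * 32   # bitmap flattened to a vector (size is just an example)
    n_hidden = 100       # arbitrary hidden-layer size
    n_outputs = 26       # one output per letter of the alphabet

    # My assumption: each layer carries its own weight matrix AND its own
    # bias vector, one bias per neuron in that layer.
    hidden_weights = rng.normal(0.0, 0.01, size=(n_hidden, n_inputs))
    hidden_bias = np.zeros(n_hidden)        # the part I'm unsure about
    output_weights = rng.normal(0.0, 0.01, size=(n_outputs, n_hidden))
    output_bias = np.zeros(n_outputs)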
Can I also confirm that I'm thinking about the feedforward aspect correctly? Here's some pseudocode:
    // input => hidden
    for j in 0..hiddenOutput.length:
        sum = hiddenBias[j]   // assuming the hidden layer gets its own bias
        for i in 0..inputs.length:
            sum += inputs[i] * hiddenWeights[j][i]
        hiddenOutput[j] = activationFunction(sum)

    // hidden => output
    for k in 0..output.length:
        sum = outputBias[k]
        for j in 0..hiddenOutput.length:
            sum += hiddenOutput[j] * outputWeights[k][j]
        output[k] = activationFunction(sum)
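Continuing with the NumPy names from my sketch above, I believe the same two passes collapse into matrix-vector products (using a sigmoid activation purely as an example):

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def feedforward(inputs):
        # input => hidden: one weighted sum plus bias per hidden neuron
        hidden_output = sigmoid(hidden_weights @ inputs + hidden_bias)
        # hidden => output: one weighted sum plus bias per output neuron
        output = sigmoid(output_weights @ hidden_output + output_bias)
        return hidden_output, output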
Assuming that is correct, would the training be something like this?
    def train(input[], desired[]):
        feed input forward to get hiddenOutput[] and output[]
        iterate through output and determine errors[]
        iterate through hiddenOutput and determine hiddenErrors[]
            (these depend on the output-layer weights, so compute them
             before those weights change)
        update outputWeights & outputBias accordingly
        update hiddenWeights & hiddenBias (or is it the same bias?) accordingly
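In case it helps to see it spelled out, here's the per-example update I think that pseudocode corresponds to, continuing the NumPy sketch above. This is my own rough version, assuming sigmoid activations and a squared-error loss, not a definitive implementation:

    learning_rate = 0.1  # arbitrary choice

    def train(inputs, desired):
        global hidden_weights, hidden_bias, output_weights, output_bias

        hidden_output, output = feedforward(inputs)

        # output-layer error term (delta) for sigmoid + squared error
        output_errors = (output - desired) * output * (1.0 - output)

        # hidden-layer error term: backpropagate through the *current*
        # output weights, i.e. before those weights are updated
        hidden_errors = (output_weights.T @ output_errors) \
            * hidden_output * (1.0 - hidden_output)

        # gradient-descent step; each layer updates its own weights AND bias
        output_weights -= learning_rate * np.outer(output_errors, hidden_output)
        output_bias -= learning_rate * output_errors
        hidden_weights -= learning_rate * np.outer(hidden_errors, inputs)
        hidden_bias -= learning_rate * hidden_errors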
Thanks in advance for any help. I've read so many examples and tutorials, and I'm still having trouble working out how to do everything correctly.
Dylan, this is probably long after your homework assignment was due, but I do have a few thoughts about what you've posted.
The thing I learned about neural nets is that you never really know why they're working (or not working). That alone is reason to keep them out of the realms of medicine and finance.