First off, this is homework. I think it's clear I've made an effort and I'm looking for hints, not code.
The problem is the following: the equation of motion has four components (the A, B, C, and D terms) that update a given neuron.
If I weight the D (distance) term heavily enough for it to have any effect, the network settles on an invalid tour (for example: visit A, D, nowhere, E, C). If I deweight D, the code does find valid tours, but not ones with minimal distance.
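For reference, the equation of motion my du() function below is meant to implement (the usual Hopfield-Tank form, as I read it back from the code) is:

du_{Xi}/dt = -u_{Xi}/\tau - A \sum_{j \ne i} V_{Xj} - B \sum_{Y \ne X} V_{Yi} - C \left( \sum_{Y} \sum_{j} V_{Yj} - n \right) - D \sum_{Y} d_{XY} \left( V_{Y,i+1} + V_{Y,i-1} \right)

where V_{Xi} = (1 + \tanh(u_{Xi}/u_0))/2, X indexes cities, i indexes tour positions, and i+1 / i-1 wrap around modulo n.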
I'd be extremely grateful for any advice; I've been banging my head against the keyboard for a while. The code should be understandable by anyone familiar with solving the TSP with a Hopfield network.
The code:
%parameters
n = 5;
theta = .5;
u0 = 0.02;
h = .1;
limit = 2000;

%init u
u = zeros(n,n);
uinit = -u0/2*log(n-1); %p94 uINIT = - u0/2 * ln(n-1)
for i = 1:n
    for j = 1:n
        u(i,j) = uinit * (1+rand()*0.2-0.1); %add noise [-0.1*uInit 0.1*uINIT]
    end
end

%loop
for index = 1:limit
    i = ceil(rand()*n);
    k = ceil(rand()*n);
    %runge kutta
    k1 = h*du(u,i,k,0);
    k2 = h*du(u,i,k, k1/2);
    k3 = h*du(u,i,k, k2/2);
    k4 = h*du(u,i,k, k3);
    u(i,k) = u(i,k) + (k1 + 2*k2 + 2*k3 + k4)/6;
end
Vfinal = hardlim(V(u)-theta)
du()
function out=du(u,X,i,c)
    dist = [0, 41, 45, 32, 32;
            41, 0, 36, 64, 54;
            45, 36, 0, 76, 32;
            32, 64, 76, 0, 60;
            32, 54, 32, 60, 0];
    t = 1;
    n = 5;
    A = 10;
    B = 10;
    C = 10;
    D = .0001;
    AComp = A*sum(V(u(X,:))) - A*V(u(X,i));
    BComp = B*sum(V(u(:,i))) - B*V(u(X,i));
    CComp = C*(sum(sum(V(u)))-n);
    DComp = 0;
    before = i-1;
    after = i+1;
    if before == 0
        before = 5;
    end
    if after == 6
        after = 1;
    end
    for Y = 1:5
        DComp = DComp + dist(X,Y) * (V(u(Y,after)) + V(u(Y,before)));
    end
    DComp = DComp * D;
    out = -1*(u(X,i)+c)/t - AComp - BComp - CComp - DComp;
V()
function out=V(u)
    u0 = 0.02;
    out = (1 + tanh(u/u0))/2;
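For completeness, one way to decode Vfinal into a tour and check it would be something like the following (just a sketch using the same dist matrix; the variable names are arbitrary):

% Sketch: decode Vfinal (rows = cities, columns = tour positions) into a tour
% and report whether it is a valid permutation plus its total length.
dist = [0, 41, 45, 32, 32;
        41, 0, 36, 64, 54;
        45, 36, 0, 76, 32;
        32, 64, 76, 0, 60;
        32, 54, 32, 60, 0];
n = 5;
tour = zeros(1,n);
for pos = 1:n
    city = find(Vfinal(:,pos));           %cities switched on at this position
    if numel(city) == 1
        tour(pos) = city;
    end                                   %0 marks an empty/ambiguous position
end
valid = all(tour > 0) && numel(unique(tour)) == n;
if valid
    len = 0;
    for pos = 1:n
        len = len + dist(tour(pos), tour(mod(pos,n)+1));
    end
    fprintf('valid tour, length %d: %s\n', len, mat2str(tour));
else
    fprintf('invalid tour: %s\n', mat2str(tour));
end

This makes it easy to see whether the A/B/C constraints are being satisfied at all before worrying about tour length.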
I have never tried solving the TSP with a neural network, but I have found that it can be solved very well, and very quickly, with a genetic approach.
I have done many neural network projects, though, and I would guess that, since the TSP can in general have many solutions over a single network (of cities), the neural network could be dragged back and forth between solutions, never really converging on any one.
John R. Doner
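As a rough illustration of the genetic approach mentioned above, a mutation-only sketch on the same 5-city distance matrix might look like this (the population size, generation count, and swap-mutation scheme are just placeholders):

% Sketch: mutation-only evolutionary search for the 5-city instance.
% Keep a population of random tours; each generation, replace the worst
% tour with a mutated (two-city-swap) copy of the best one.
dist = [0, 41, 45, 32, 32;
        41, 0, 36, 64, 54;
        45, 36, 0, 76, 32;
        32, 64, 76, 0, 60;
        32, 54, 32, 60, 0];
n = 5; popsize = 20; generations = 500;
tourlen = @(t) sum(dist(sub2ind(size(dist), t, t([2:end 1])))); %closed tour length
pop = zeros(popsize, n);
for p = 1:popsize
    pop(p,:) = randperm(n);               %random initial tours
end
lens = zeros(popsize,1);
for g = 1:generations
    for p = 1:popsize
        lens(p) = tourlen(pop(p,:));
    end
    [~, best]  = min(lens);
    [~, worst] = max(lens);
    child = pop(best,:);
    swap = randperm(n, 2);                %mutate: swap two random cities
    child(swap) = child(fliplr(swap));
    pop(worst,:) = child;
end
for p = 1:popsize
    lens(p) = tourlen(pop(p,:));
end
[bestLen, best] = min(lens);
bestTour = pop(best,:)
bestLen

A real GA would add crossover and proper selection, but even a simple swap-mutation loop like this usually does fine on an instance as small as 5 cities.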