Can a neural network emulate factorial decomposition (or some other method) to produce a list permutation given the permutation's unique index?
I have a list of 10 things, and what they are is irrelevant. What I care about is that my 10 things can be placed into 3628800 (or 10!) unique orders, because then I can express any list order of my 10 things using an unsigned integer and factorial decomposition:
Order 0: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Order 1: 0, 1, 2, 3, 4, 5, 6, 7, 9, 8
Order ....
Order 3628799: 9, 8, 7, 6, 5, 4, 3, 2, 1, 0
This allows for the parallel distribution of analysis on different list orders of my 10 things.
A common example being the travelling salesman problem:
1. I give 500 different computers each a range of unsigned integers:
0 -> 7257 for computer 0,
7258 -> 14515 for computer 1,
etc.
2. Each computer first calculates the list order from its unsigned integer
index using factorial decomposition.
ie. Order 1 -> 0, 1, 2, 3, 4, 5, 6, 7, 9, 8
3. The distance between the cities placed in the order described is calculated.
4. The shortest distance found by each computer is collected, and the
shortest of those is taken, leaving us with a single unsigned integer
index that describes the shortest permutation of cities.
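The four steps above can be sketched in JavaScript. The city coordinates and the tourLength helper are hypothetical, and the 500 computers are simulated as a sequential loop; 4 cities (4! = 24 orders) keep the example small:

```javascript
// Sketch of steps 1-4: partition the permutation index space across
// "computers" and reduce to the globally shortest tour.

function ithPermutationOfNElements(n, i) {
    var j, k = 0, fact = [1], perm = [];
    while (++k < n) fact[k] = fact[k - 1] * k;
    for (k = 0; k < n; ++k) {
        perm[k] = Math.floor(i / fact[n - 1 - k]);
        i = i % fact[n - 1 - k];
    }
    for (k = n - 1; k > 0; --k)
        for (j = k - 1; j >= 0; --j)
            if (perm[j] <= perm[k]) perm[k]++;
    return perm;
}

// Hypothetical cities at the corners of a 4 x 3 rectangle.
var cities = [[0, 0], [0, 3], [4, 0], [4, 3]];

// Total length of the open tour visiting cities in the given order.
function tourLength(order) {
    var d = 0;
    for (var k = 1; k < order.length; ++k) {
        var a = cities[order[k - 1]], b = cities[order[k]];
        d += Math.hypot(a[0] - b[0], a[1] - b[1]);
    }
    return d;
}

// Step 1: split the 24 indices into equal ranges, one per "computer".
var total = 24, workers = 4, chunk = total / workers;
var best = { index: -1, length: Infinity };
for (var w = 0; w < workers; ++w) {
    // Steps 2-3: decode each index to a list order and score that tour.
    for (var i = w * chunk; i < (w + 1) * chunk; ++i) {
        var len = tourLength(ithPermutationOfNElements(4, i));
        // Step 4: keep the shortest; its index fully describes the tour.
        if (len < best.length) best = { index: i, length: len };
    }
}
console.log(best.index, best.length); // 1 10
```

The single integer best.index is all that needs to be sent back: any machine can reconstruct the winning tour from it with the same decomposition.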
The same process can be applied to virtually any bounded search space, though it often demands far more computational power than is feasible.
We can calculate the N-th permutation of any fixed-size list (granted, we will need big-integer support for larger lists) using factorial decomposition, shown here in JavaScript:
function ithPermutationOfNElements (n, i)
{
    var j, k = 0;
    var fact = [];
    var perm = [];

    // compute factorial numbers
    fact[k] = 1;
    while (++k < n)
        fact[k] = fact[k - 1] * k;

    // compute factorial code
    for (k = 0; k < n; ++k)
    {
        perm[k] = Math.floor(i / fact[n - 1 - k]);
        i = i % fact[n - 1 - k];
    }

    // readjust values to obtain the permutation
    // start from the end and check if preceding values are lower
    for (k = n - 1; k > 0; --k)
        for (j = k - 1; j >= 0; --j)
            if (perm[j] <= perm[k])
                perm[k]++;

    return perm;
}

console.log(ithPermutationOfNElements(4, 23)); // [ 3, 2, 1, 0 ]
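For sanity-checking, the inverse mapping (permutation back to unsigned integer index) can be sketched as well; indexOfPermutation is a name chosen here for illustration, not part of the original code:

```javascript
// Inverse of ithPermutationOfNElements: recover the unsigned integer
// index of a given permutation of 0..n-1 via its Lehmer code.
function indexOfPermutation (perm)
{
    var n = perm.length, index = 0;
    var fact = [1];
    for (var k = 1; k < n; ++k)
        fact[k] = fact[k - 1] * k;

    for (var k = 0; k < n; ++k)
    {
        // Lehmer digit: how many later elements are smaller than perm[k].
        var smaller = 0;
        for (var j = k + 1; j < n; ++j)
            if (perm[j] < perm[k])
                smaller++;
        index += smaller * fact[n - 1 - k];
    }
    return index;
}

console.log(indexOfPermutation([3, 2, 1, 0])); // 23
console.log(indexOfPermutation([0, 1, 2, 3, 4, 5, 6, 7, 9, 8])); // 1
```

Round-tripping an index through both functions should return the original index, which makes it easy to test any alternative (neural or otherwise) implementation for exact correctness.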
Can any neural network architecture and training combination emulate this function, given i as its only input neuron and n output neurons representing each element of the permutation?
A neuron can operate as a logic gate, so a neural network can perform any calculation a computer can. However, used that way it merely emulates logic gates inefficiently, so it is not a good solution for this problem.
In general, neural networks are good with 'real' or 'natural' data. They also generally operate on floats, not integers. So if there is a pattern to be learnt, a NN might learn it, but the answer you get will be something like 0.783267. You could then denormalize this to 89743, but it's unlikely to be exactly right. For your requirement, being one integer off the right answer is completely wrong.
By contrast, for a face recognition NN returning 0.787 or 0.786 for a particular image, both could be considered correct.
Your problem is better suited to a traditional, procedural code solution, with only one correct answer for each input. Generally in AI, you are looking for the correct answer, within a certain range or probability distribution.
Regarding implementing algorithms with NNs:
You can have many neurons acting as logic gates, so you can build neuron NAND gates / flip-flops etc. acting as adders / multipliers / latches etc., until you have essentially built a Turing machine, gate by gate. It will in no way resemble a normal NN as used by the majority of the AI world. Furthermore, you already have a perfectly good Turing machine right in front of you.
Here is the code for a neural network AND gate in Matlab. No training is required: I've used configure instead of train, and just set the weights manually. By building the other logic gate types in the same way, you could construct an entire Turing machine.
and = feedforwardnet(1);
truthTable = [0 0 1 1; 0 1 0 1];
and_out = [0 0 0 1];
and = configure(and, truthTable, and_out);
vals = [-2 -2 -1 2 0];
and.IW{1} = vals(1:2); % input1 to hidden, input2 to hidden
and.LW{2,1} = vals(3); % hidden to output
and.b{1} = vals(4); % bias to hidden
and.b{2} = vals(5); % bias to output
y = sim(and, truthTable)
round (y)
mse = mean ((y - and_out) .^ 2)
y =
0.0000 0.0180 0.0180 0.9820
ans =
0 0 0 1
mse =
2.4263e-04
A little known fact is that recurrent neural nets are Turing complete, and can thus perform any computation a computer can (see result by Siegelmann).
This means neither (a) that you can easily find the necessary weights with a learning algorithm, nor (b) that a feed-forward net, which is probably what you are looking at, can do it.
Nevertheless, this seems like a task you don't want to do with a neural net.
Generally, neural networks are universal function approximators, so in theory yes.
More specifically, note that for a particular (fixed) value of i, the network that solves it is trivial; in fact it requires no hidden nodes or activation functions, since the problem is linear.
As a brute-force, naive solution to the general problem with unfixed i: a neural network large enough to encode all 10! possible linear encodings, with a hidden layer essentially acting as a mux on the i input, would solve the problem. More efficient networks probably exist, and it would be interesting to try a recurrent architecture on this problem.
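As a concrete (if impractical) version of that construction, here is a sketch for n = 3 with an idealized one-hot hidden layer acting as the mux. The hidden activations are hard indicators rather than anything a trained network would produce, so this demonstrates existence of a solution, not a practical one:

```javascript
// Existence sketch of the brute-force "mux" network for n = 3 (3! = 6).
// The hidden layer one-hot encodes i; the output layer is a plain linear
// combination whose weight rows store the permutations themselves.
// Nothing here is trained.

function ithPermutationOfNElements(n, i) {
    var j, k = 0, fact = [1], perm = [];
    while (++k < n) fact[k] = fact[k - 1] * k;
    for (k = 0; k < n; ++k) {
        perm[k] = Math.floor(i / fact[n - 1 - k]);
        i = i % fact[n - 1 - k];
    }
    for (k = n - 1; k > 0; --k)
        for (j = k - 1; j >= 0; --j)
            if (perm[j] <= perm[k]) perm[k]++;
    return perm;
}

var n = 3, total = 6;

// Output weight matrix: row h is the h-th permutation.
var W = [];
for (var h = 0; h < total; ++h)
    W.push(ithPermutationOfNElements(n, h));

function muxNetwork(i) {
    // Hidden layer: one-hot "mux" selecting row i (idealized activation).
    var hidden = [];
    for (var h = 0; h < total; ++h)
        hidden[h] = (h === i) ? 1 : 0;

    // Output layer: linear combination of the hidden activations.
    var out = [];
    for (var k = 0; k < n; ++k) {
        out[k] = 0;
        for (var h = 0; h < total; ++h)
            out[k] += hidden[h] * W[h][k];
    }
    return out;
}

console.log(muxNetwork(5)); // [ 2, 1, 0 ]
```

For n = 10 the hidden layer would need 10! units, which is exactly why memorization-by-construction is not an interesting way to "learn" this function.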
In any case, while a solution certainly exists, the better question is whether this is a good way to solve it. If a problem boils down to some simple pseudocode, I would avoid a neural network implementation unless it is for academic purposes.