 

Markov Model decision process in Java

I'm writing an assisted learning algorithm in Java.

I've run into a mathematical problem that I can probably solve, but because the processing will be heavy I need an optimum solution.

That being said, if anyone knows of an optimized library that would be totally awesome, but the language is Java, so that will need to be taken into consideration.

The idea is fairly simple:

Objects will store combinations of variables such as ABDC, ACDE, DE, AE.

The max number of combinations will be based on how many I can run without slowing down the program, so theoretically let's say 100.

The decision process will generate one random variable per iteration. If the generated variable is part of one of the combinations, e.g. 'A', which is part of ABDC and ACDE, then the propensity for C and B (or any following letter in a stored combination) will increase.

To make things a little more clear, let's assume that 'A', 'B', 'C', 'D', and 'E' are the only possible variables. In truth there will be more, like 12 or 14, but that maximum will also depend on how many I can process without lag.

Since there are five possible variables, it will generate a weighted 1/5 random roll for the first iteration. If that roll turns out to be 'A', then in the next iteration 'B' and 'C' will have a 2/5 propensity instead of 1/5.

If the next iteration were to generate 'B', the 'D' propensity would increase to 3/5. Note: the relationship is exponential; realistically it won't be jumps of 1/5 but a slight boost, say 10%, which will snowball to perhaps 50% by the 4th variable in a sequence.

Now, in Java, I can probably achieve this functionality by tracking all of the stored combinations for each object. I was thinking that by distributing the tracking process in small steps across each iteration, it shouldn't be too slow.

Another solution would be mapping all of the possible combinations and their potential propensities. This would simply require a search function, but it also presents the problem of calculating all of the possibilities and storing them somewhere, probably in a file.
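A minimal sketch of the first approach (the class and method names, and the 10% boost factor, are illustrative assumptions, not from the question): track how far the generated sequence has matched each stored combination, then multiply the weight of each combination's next expected letter before the weighted roll.

```java
import java.util.Arrays;
import java.util.Random;

// Sketch: weighted sampling over letters, where each partially matched
// stored combination boosts the letter that would continue it.
public class PropensitySketch {
    public static void main(String[] args) {
        char[] letters = {'A', 'B', 'C', 'D', 'E'};
        String[] combos = {"ABDC", "ACDE", "DE", "AE"};
        int[] progress = new int[combos.length]; // matched prefix length per combo

        double[] weights = new double[letters.length];
        Random rng = new Random();

        for (int iter = 0; iter < 10; iter++) {
            Arrays.fill(weights, 1.0); // baseline propensity each iteration
            for (int i = 0; i < combos.length; i++) {
                String c = combos[i];
                if (progress[i] > 0 && progress[i] < c.length()) {
                    char next = c.charAt(progress[i]);
                    // 10% boost per matched letter, snowballing with match depth
                    weights[indexOf(letters, next)] *= Math.pow(1.10, progress[i]);
                }
            }
            char pick = weightedPick(letters, weights, rng);
            for (int i = 0; i < combos.length; i++) {
                String c = combos[i];
                if (progress[i] < c.length() && c.charAt(progress[i]) == pick) {
                    progress[i]++;                                  // combo advanced
                } else {
                    progress[i] = (c.charAt(0) == pick) ? 1 : 0;    // restart match
                }
            }
            System.out.println(pick);
        }
    }

    // Roll a random number in [0, totalWeight) and walk the weights.
    static char weightedPick(char[] letters, double[] w, Random rng) {
        double total = 0;
        for (double x : w) total += x;
        double r = rng.nextDouble() * total;
        for (int i = 0; i < letters.length; i++) {
            r -= w[i];
            if (r <= 0) return letters[i];
        }
        return letters[letters.length - 1];
    }

    static int indexOf(char[] a, char c) {
        for (int i = 0; i < a.length; i++) if (a[i] == c) return i;
        return -1;
    }
}
```

The per-iteration work here is O(combinations × sequence length), which for ~100 combinations of ~5 letters should be far from any performance problem.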

It has been suggested that I should use a Markov model and/or library, though I am not too familiar with this type of mathematics.

How can I compute this process quickly in Java?

Example >>>

Only one sequence: ABC.

For three numbers, the chances begin equal, so it would look something like rand(1,3).

If A is the result, we increase the likelihood of B, because it is the next letter in the sequence. Let's say we double it.

So now the chances are: A=1/4, C=1/4, B=2/4

The function will now look like rand(1,4) where the results of 3 and 4 both represent option B.

If the next result is B, we want to increase the likelihood of C, because it is the next character in the sequence, but by twice as much as the increase last time (exponentially).

Chances are now something like: A=1/6, B=1/6, C=4/6

The function is now rand(1,6) where values 3, 4, 5, 6 represent C.
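The worked example can be sketched directly with integer weights (class and method names are hypothetical): rand(1, total) becomes a weighted pick, and the weight of the next letter in the sequence doubles, then quadruples, as the match progresses.

```java
import java.util.Random;

// Sketch of the worked example: one sequence "ABC", integer weights,
// and the "next letter" weight growing exponentially on each match.
public class AbcExample {
    public static void main(String[] args) {
        char[] letters = {'A', 'B', 'C'};
        int[] weights = {1, 1, 1};   // start equal: rand(1,3)
        String sequence = "ABC";
        int matched = 0;             // prefix of the sequence matched so far
        int boost = 2;               // doubles each step: 2, 4, 8, ...

        Random rng = new Random();
        for (int iter = 0; iter < 5; iter++) {
            char pick = roll(letters, weights, rng);
            if (matched < sequence.length() && pick == sequence.charAt(matched)) {
                weights[pick - 'A'] = 1;     // consumed letter reverts to baseline
                matched++;
                if (matched < sequence.length()) {
                    int next = sequence.charAt(matched) - 'A';
                    weights[next] *= boost;  // B: 1->2 (total 4), then C: 1->4 (total 6)
                    boost *= 2;
                }
            }
        }
    }

    // Equivalent of rand(1, total): pick a letter proportionally to its weight.
    static char roll(char[] letters, int[] weights, Random rng) {
        int total = 0;
        for (int w : weights) total += w;
        int r = rng.nextInt(total); // 0 .. total-1
        for (int i = 0; i < letters.length; i++) {
            r -= weights[i];
            if (r < 0) return letters[i];
        }
        return letters[letters.length - 1];
    }
}
```

After 'A' the weights are {1, 2, 1} (rand over 4, as in the example); after 'B' they are {1, 1, 4} (rand over 6, with four outcomes mapping to C).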

asked Dec 18 '15 by bigcodeszzer


1 Answer

You could write a C/C++ version if you want, and use the NDK. (The overhead of the NDK is in the JNI translations from Java to C/C++ methods, but once there, they are much faster.)

That is one idea. But I don't think you have to go that far, at least to get a version that works for smaller sets (moving to the NDK later might be a better option for LARGE sets).

I think you would be much better off treating this like an array of 'whole number fractions', i.e. a two-dimensional array for each set of an action's probabilities, with the numerators on the 'top row' and the denominators on the 'bottom row'. Since the sets you are going to be working with are likely small, I'd think a simple linked list of nodes, where each node has its own set of probabilities, would work. (These probabilities are the transition tables from S to S' for 'that' node.)

 int[][] probs = new int[100][2]; // probs[i][0] = numerator, probs[i][1] = denominator

So you can think of it as ...

 numerators:   1 2 1 1
 denominators: 4 3 4 9

i.e. 1/4, 2/3, 1/4, 1/9 with whole-integer arithmetic. This would be easier in 'some' parts of the algorithm, because you will be able to make nice helper functions like 'removeColumn' (set the column to 0/0 and skip the rest of processing, or however you want to represent it) and 'adjustProbabilities()'.

(You might be able to get away with a single array of numerators if you make the denominators a single int (the lowest common denominator), but I'd make that an optimization after getting the 2D-array version working.)
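As a rough sketch of the helpers mentioned above (the names and signatures are assumptions for illustration, not an existing API), using probs[i][0] as the numerator and probs[i][1] as the denominator:

```java
// Hypothetical helpers over the numerator/denominator array from the answer.
public class FractionTable {
    // Mark a column as removed by setting it to 0/0, so later passes skip it.
    static void removeColumn(int[][] probs, int i) {
        probs[i][0] = 0;
        probs[i][1] = 0;
    }

    // Scale a fraction in place, e.g. doubling the numerator turns 1/4 into 2/4.
    static void adjustProbability(int[][] probs, int i, int numFactor, int denFactor) {
        probs[i][0] *= numFactor;
        probs[i][1] *= denFactor;
    }

    public static void main(String[] args) {
        int[][] probs = {{1, 4}, {2, 3}, {1, 4}, {1, 9}};
        adjustProbability(probs, 0, 2, 1); // 1/4 -> 2/4
        removeColumn(probs, 3);            // 1/9 -> 0/0
        System.out.println(probs[0][0] + "/" + probs[0][1]); // prints 2/4
    }
}
```

Keeping everything in integer arithmetic avoids floating-point drift when the same probabilities are adjusted many times.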

Then just write the 'simple' generic P, R, and V methods that operate on that data for each node, and make them adjustable/extensible with good OO design.

Then just 'play with the numbers' for the discount factor, etc.

I think this is more of a 'just take the time to test it out' issue than one about any really complex math algorithms, because from what I see there are no 'obvious' places to optimize the core algorithms.

answered Sep 24 '22 by mawalker