Let's say I have 20 players [names A .. T] in a tournament. The rules of the tournament state that each player plays every other player twice [A vs B, B vs A, A vs C .. etc]. With 20 players, there will be a total of 380 matches.
In each match, there are three possible outcomes - player 1 wins, player 2 wins, or draw. There's a betting exchange which, ahead of each match, quotes the probabilities of each outcome occurring; so you might have 40% player 1 wins, 30% player 2 wins, 30% draw [probabilities sum to 100%]. I store these probabilities ahead of each match.
Fast forward one quarter of the way through the tournament. I have collected probabilities for 95 games, with 285 still to go. What I want to know is -
Can the probability data from the 95 games be used to predict probabilities for the remaining 285?
For example, if I know A vs B and B vs C, can I use them to infer A vs C ?
And if so, how do I do it?
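For concreteness, here's roughly the shape of what I store; the structure and numbers below are illustrative, not my real data:

```python
# One entry per match quoted so far (95 of the 380).
# Keys are (player1, player2); order matters, since A vs B and
# B vs A are separate matches with separate quotes.
quotes = {
    ("A", "B"): {"p1_win": 0.40, "p2_win": 0.30, "draw": 0.30},
    ("B", "A"): {"p1_win": 0.35, "p2_win": 0.35, "draw": 0.30},
    ("B", "C"): {"p1_win": 0.50, "p2_win": 0.25, "draw": 0.25},
    # ...
}
```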
Let me introduce you to my good friend Bayes... http://en.wikipedia.org/wiki/Bayesian_inference
Edit: Part 1) Bayesian updating only buys you something when the trials are dependent. If winning one game somehow changes the probability of winning the next, you can carry on! If the games are genuinely independent, this isn't very helpful at all.
Edit: Part 2) Regardless, the basis is Bayes' formula:
P(A|B) = P(B|A) * P(A) / P(B)
Which is read, "the probability of A given B equals the probability of B given A, times the probability of A, all divided by the probability of B."
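A quick numeric sanity check, with numbers that are entirely made up just to exercise the formula:

```python
p_a = 0.4          # P(A): prior
p_b_given_a = 0.5  # P(B|A): likelihood
p_b = 0.25         # P(B): evidence

p_a_given_b = p_b_given_a * p_a / p_b  # Bayes' rule
print(p_a_given_b)  # 0.8
```

To illustrate this in action, the classic example is the game show problem with 3 doors.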
You have 3 doors, and behind one door there's a brand new car. The other two doors have absolutely nothing. The host asks you to pick a door. Remember, there are doors 'A', 'B', and 'C', so you have a 1/3 probability of being correct.
The host, who knows where the car is, opens one of the other doors, always revealing an empty one. He then gives you the option of sticking with your original door or switching to the remaining closed door.
I realized that explaining this fully in a Stack Overflow reply would take forever, so here's the reference instead. This is the Monty Hall problem: http://en.wikipedia.org/wiki/Monty_Hall_problem (see http://en.wikipedia.org/wiki/Monty_Hall_problem#Bayesian_analysis for the Bayesian treatment).
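If you'd rather convince yourself by brute force than by the algebra, here's a quick Monte Carlo sketch (mine, not from the linked article):

```python
import random

def monty_hall_trial(switch):
    """Play one round; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither your pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
```

Staying wins about 1/3 of the time; switching wins about 2/3, because switching wins exactly when your first pick was wrong.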
Edit: Part 3) You may want to look up 'Bayesian networks' if you decide this sort of approach can work (but on a much grander scale).
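Edit: Part 4) To connect this back to the tournament: one common trick for the "A vs B and B vs C implies A vs C" question is to assume each player has a single latent strength and that the quoted probabilities reflect differences in strength (a Bradley-Terry / Elo-style model). What follows is a minimal sketch under that assumption, not a definitive method: it collapses each quote into one expected score (win = 1, draw = 1/2), so it predicts expected scores rather than separate win/draw/loss probabilities (recovering a distinct draw probability needs a richer model, e.g. a Davidson-style extension of Bradley-Terry), and the quotes, learning rate, and iteration count are all made up.

```python
import math
from collections import defaultdict

def expected_score(q):
    """Collapse a win/draw/loss quote into one expected score for player 1."""
    return q["p1_win"] + 0.5 * q["draw"]

def fit_ratings(quotes, rounds=500, lr=0.1):
    """Crude gradient fit of one latent rating per player."""
    ratings = defaultdict(float)
    for _ in range(rounds):
        for (p1, p2), q in quotes.items():
            target = expected_score(q)
            # Logistic model: the prediction depends only on the rating difference.
            pred = 1.0 / (1.0 + math.exp(ratings[p2] - ratings[p1]))
            ratings[p1] += lr * (target - pred)
            ratings[p2] -= lr * (target - pred)
    return ratings

def predict(ratings, p1, p2):
    """Predicted expected score for p1 in an as-yet-unquoted pairing."""
    return 1.0 / (1.0 + math.exp(ratings[p2] - ratings[p1]))

# Illustrative quotes in the same shape as the question's data:
quotes = {
    ("A", "B"): {"p1_win": 0.40, "p2_win": 0.30, "draw": 0.30},
    ("B", "C"): {"p1_win": 0.50, "p2_win": 0.25, "draw": 0.25},
}
ratings = fit_ratings(quotes)
print(predict(ratings, "A", "C"))  # A vs C, inferred via B (~0.67 here)
```

Because ratings add up along chains (A over B plus B over C implies A over C), this builds in exactly the transitivity you were asking about; whether it actually holds for your players is an empirical question you can check against the quotes that come in later.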