Reading about how Google solves the translation problem got me thinking. Would it be possible to build a strong chess engine by analysing several million games and determining the best possible move based largely (completely?) on statistics? Several such chess databases exist (one of them contains 4.5 million games), and one could potentially weight moves in identical (or mirrored, or reflected) positions using factors such as the ratings of the players involved, how old the game is (to account for improvements in chess theory), and so on. Are there any reasons why this wouldn't be a feasible approach to building a chess engine?
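To make the idea concrete, here is a rough sketch of what such a statistics-only move chooser might look like. Everything here is hypothetical: the database record format, the weighting scheme, and the position keys are invented purely for illustration.

```python
from collections import defaultdict

def build_move_stats(games, current_year=2024):
    """Accumulate a weighted score for every (position, move) pair seen.

    `games` is assumed to yield (positions, moves, result, avg_rating, year)
    records from some hypothetical database, where `result` is +1/0/-1 from
    White's perspective and positions/moves alternate starting with White.
    """
    stats = defaultdict(lambda: defaultdict(float))
    for positions, moves, result, avg_rating, year in games:
        # Arbitrary scheme: weight stronger and more recent games more heavily.
        weight = (avg_rating / 2000.0) * 0.95 ** (current_year - year)
        for ply, (pos, move) in enumerate(zip(positions, moves)):
            # Flip the sign of the result for Black's moves.
            mover_score = result if ply % 2 == 0 else -result
            stats[pos][move] += weight * mover_score
    return stats

def best_move(stats, position):
    """Return the highest-scoring recorded move, or None if the position is unseen."""
    candidates = stats.get(position)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)
```

The `best_move` fallthrough already hints at the core problem: as soon as a position has never been seen before, pure statistics have nothing to say.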
Something like this is already done: it's the underlying concept of opening books.
Due to the nature of the game, computer AIs are notoriously bad in the opening, when there are so many possibilities and the end goal is still far off. They start to improve toward the middlegame, when tactical possibilities begin to form, and they can play the endgame perfectly, far exceeding the capability of most humans.
To help the AI make good moves in the beginning, many engines rely on opening books instead: basically, a statistically derived flowchart of moves. Many games between highly rated players were analyzed, and the recommendations were hard-coded into "the book"; while the position is still in "the book", the AI doesn't even "think", and simply follows what "the book" says.
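To illustrate, here is a minimal sketch of what "following the book" can look like in code. The two FEN keys are real positions (the starting position and the position after 1.e4), but the book entries and the `search` fallback are made up for the example.

```python
# Toy opening book: position (as a FEN string) -> recommended move (UCI notation).
OPENING_BOOK = {
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1": "e2e4",
    "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1": "c7c5",
}

def choose_move(fen):
    book_move = OPENING_BOOK.get(fen)
    if book_move is not None:
        return book_move   # still "in book": no thinking required
    return search(fen)     # out of book: fall back to the engine's normal search

def search(fen):
    # Placeholder for the engine's regular search/evaluation machinery.
    raise NotImplementedError
```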
Some people can also memorize opening books (this is largely why Fischer invented his random chess variant, Chess960: to make memorizing openings far less effective). Partly because of this, an unconventional move is sometimes played early in the game, not because it's statistically the best move according to history, but for precisely the opposite reason: it leads to a position that isn't "known" and can take your opponent (human or computer) "out of the book".
On the opposite end of the spectrum there is the endgame tablebase, which is basically a database of exhaustively analyzed endgame positions. Since those positions have already been searched to the end, a tablebase enables perfect play: given any position in it, one can immediately tell whether it is won, lost, or drawn, and what the best way is to achieve or avoid that outcome.
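As a concrete example, the python-chess library ships bindings for Syzygy tablebases. Here is a minimal sketch of a probe; the tablebase path is a placeholder for locally downloaded table files.

```python
import chess
import chess.syzygy

# A simple king-and-pawn endgame: White king e1, Black king e3, Black pawn e2.
board = chess.Board("8/8/8/8/8/4k3/4p3/4K3 b - - 0 1")

with chess.syzygy.open_tablebase("/path/to/syzygy") as tablebase:
    wdl = tablebase.probe_wdl(board)  # 2 = win, 0 = draw, -2 = loss (for the side to move)
    dtz = tablebase.probe_dtz(board)  # moves until the next pawn move or capture (50-move rule)
    print(wdl, dtz)
```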
In chess, though, something like this is only feasible for the opening and the endgame. The complexity of the middlegame is what makes the game interesting: if one could play chess simply by looking up a table, the game wouldn't be as exciting, interesting, and deep as it is.
Well, 4.5 million games still cover only a tiny (infinitesimally small) fraction of all possible games: Claude Shannon estimated on the order of 10^43 legal positions and around 10^120 possible games, so even a huge database barely samples the space.
And while you would have a large number of winning and losing positions, that still leaves the problem of reducing them to a usable set of parameters. It's a very old problem, with neural networks as a standard approach; but neural networks aren't winning chess tournaments.
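For a sense of what "reducing positions to a usable set of parameters" can mean in practice, here is one common encoding (an illustration only, not necessarily what the author had in mind): each position becomes a fixed-length feature vector that a model could be trained on.

```python
import chess
import numpy as np

def encode(board: chess.Board) -> np.ndarray:
    """Encode a position as a 768-dim binary vector: 12 piece types x 64 squares."""
    x = np.zeros(768, dtype=np.float32)
    for square, piece in board.piece_map().items():
        plane = (piece.piece_type - 1) * 2 + (0 if piece.color == chess.WHITE else 1)
        x[plane * 64 + square] = 1.0
    return x

print(encode(chess.Board()).sum())  # 32 pieces on the starting board -> 32.0
```

Any model trained on such vectors still has to compress an astronomically large position space into comparatively few parameters, which is exactly the difficulty this answer points at.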