How is acpl calculated? #8867
-
I've been getting interested in chess statistics, and wanted to be able to calculate average centipawn loss (ACPL) in a way that would be directly comparable to lichess. I have a general idea of how it is done, but I can't find very much information on the specifics. Also, I can't read Scala, so looking through the code doesn't help me very much. There is a little information in the lichess FAQ, but if you dig into the numbers, there is clearly more to it, because if you just take the average of the loss for each move, on many games you will get a quite different number than lichess.

I found a discussion suggesting that chess.com implements a cap of ±1000 centipawns on the evaluations used in the calculation. I tried this with some data from my games on lichess, and I get answers that are very close to lichess. However, something is a little bit off: my numbers tend to be a bit lower than lichess, and rarely match exactly, even though I'm using lichess's own evaluations to do my calculations.

Here's a graph that shows how my numbers compare to lichess. The red line is y = x; in other words, if I matched lichess's ACPL exactly, all the black dots would fall on the red line. Instead, my numbers trend a little bit low, especially for low ACPL. In one case, I even get a negative ACPL!

Can someone explain the exact methods used by lichess for this calculation? (I use the plural "methods" because I saw someone pointing out that there are differences between ACPL for analysis vs. Insights.)
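For reference, here is roughly what I'm computing, as a Python sketch. This is my own reconstruction of the "cap at ±1000, then average the per-move loss" approach I described, not lichess's actual code; the 0cp starting eval is an assumption on my part.

```python
def my_acpl(evals, color):
    """Naive ACPL: cap evals at +-1000 and average the eval change
    across each of the given side's moves.

    evals: centipawn evaluations from White's point of view, one per
    half-move (evals[0] is after White's first move).
    color: 'white' or 'black'.
    """
    capped = [max(-1000, min(1000, e)) for e in evals]
    seq = [0] + capped  # ASSUMPTION: the game starts at an even 0cp
    sign = -1 if color == 'white' else 1  # a drop in eval is a loss for White
    # Half-moves 0, 2, 4, ... are White's; 1, 3, 5, ... are Black's.
    diffs = [sign * (after - before)
             for i, (before, after) in enumerate(zip(seq, seq[1:]))
             if (i % 2 == 0) == (color == 'white')]
    return sum(diffs) / len(diffs) if diffs else 0
```

Note that nothing here prevents an individual "loss" from being negative, which is presumably how I ended up with a negative ACPL in one game.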
-
Here is the relevant code: https://github.com/ornicar/lila/blob/master/modules/analyse/src/main/Accuracy.scala

For white, the sequence starts at +15cp; for black, it starts with the eval after white's first move (i.e. before black's first move). Evaluations are capped at ±1000, and mate scores are counted at that cap as well. Then calculate the difference between the evals before and after each of the player's moves, flooring the loss at 0 (i.e. a move can only make the position worse or hold it, never improve it). And of course you need to take care that the sign is handled appropriately depending on which side you are looking at, i.e. negate the difference for white, since the value is reported as a loss.

I think that should be it. Then just take the mean of all those differences, i.e. sum(diffs)/len(diffs).
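A minimal Python sketch of the procedure described above (assuming the eval list is in centipawns from White's point of view, one entry per half-move, with mate scores already expressed as large centipawn values so the cap handles them):

```python
def clamp(cp):
    # Cap evaluations at +-1000 centipawns; mate scores get the same treatment.
    return max(-1000, min(1000, cp))

def acpl(evals, color):
    """Average centipawn loss for one side.

    evals: engine evaluations in centipawns from White's point of view,
    one per half-move (evals[0] is the position after White's first move).
    color: 'white' or 'black'.
    """
    capped = [clamp(e) for e in evals]
    # White's baseline is +15cp for the starting position; Black's baseline
    # is the eval after White's first move, which is already capped[0].
    seq = [15] + capped if color == 'white' else capped
    losses = []
    # Pair each "eval before the move" with the "eval after the move".
    for before, after in zip(seq[0::2], seq[1::2]):
        if color == 'white':
            # From White's POV the loss is how far the eval dropped;
            # floor it at 0, since a move never counts as a gain.
            losses.append(max(0, before - after))
        else:
            losses.append(max(0, after - before))
    return sum(losses) / len(losses) if losses else 0
```

For example, `acpl([15, 15, -100, -100], 'white')` pairs (15, 15) and (15, -100), giving losses of 0 and 115 and an ACPL of 57.5, while Black's moves held the eval steady, so `acpl([15, 15, -100, -100], 'black')` is 0.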