Forum Archive :
Probability and Statistics
Expected variation in points after a series of games
At the moment I'm playing a few longer money sessions against two guys
and (for training) 1000 money games against Snowie 1.3. While thinking
about some longer winning and losing streaks, I was wondering if there
is reliable statistical material available about what may happen in a
given number of games.
What I mean:
1) Let's assume you play against someone 1000 money games. You and your
opponent are ranked between expert and world class level (if that
matters). You play using the Jacoby rule, beavers and raccoons, but no
automatics. Let's further assume that the gammon probability is round
What is the interval of possible results with a given 95% probability?
2) Nearly the same, but now you play only 100 games.
3) And now you stop at 50 games.
4) To make it more difficult, the same scenario as shown above, but now
you are 5% better than your opponent (equity = +0.01 per game).
Years ago I studied economics, including two semesters of statistics. I
know it is possible to calculate this, but I'm too short of time to
figure it out again.
Can anyone help?
Stig Eide writes:
Some time ago I made a formula which might be useful to you:
If you play n games, a 96% confidence interval for two
equally good players is 0 +/- 5*sqrt(n) points.
That is, if you play 100 games, you should be prepared
to lose 50 points.
If the players are of unequal strength, the 0 in the formula
can be replaced with ppg*n. But I don't think the ppg is an easy
thing to calculate. But hey, David Montgomery once made a formula
to translate between FIBS rating and money-game performance.
*searching* Please wait.
50 FIBS rating points are worth about 0.1 ppg.
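Stig's rule of thumb is easy to put into a few lines of code. The sketch below (Python, my choice; the function name is mine) just evaluates the ppg*n +/- 5*sqrt(n) formula from the post above:

```python
import math

def stig_interval(n, ppg=0.0):
    """Approximate 96% confidence interval for the total point score
    after n money games, per Stig Eide's rule of thumb:
    centre ppg*n, half-width 5*sqrt(n)."""
    half_width = 5 * math.sqrt(n)
    centre = ppg * n
    return (centre - half_width, centre + half_width)

# 100 games between equal players: be prepared to lose ~50 points.
print(stig_interval(100))             # (-50.0, 50.0)
# Same 100 games with a 0.1 ppg edge (the FIBS conversion above).
print(stig_interval(100, ppg=0.1))    # (-40.0, 60.0)
```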
Gary Wong writes:
> 1) Let's assume you play against someone 1000 money games.
> What is the interval of possible results with a given 95% probability?
A reasonable model for the distribution of points after a long series of
money games is that the total points after n games are normally distributed
with mean pn and variance 9n (where p is the advantage you have in
points per game). This is only an approximate model and is not tailored
to the parameters you specified, but it seems to fit the data I have.
A 95% prediction interval for the score after 1000 games between evenly
matched players is from -186 to +186 points.
> 2) Nearly the same, but now you play only 100 games.
Now the interval is from -59 to +59 points.
> 3) And now you stop at 50 games.
-42 to +42 points.
> 4) To make it more difficult, the same scenario as shown above, but now
> you are 5% better than your opponent (equity = +0.01 per game).
I'm not sure what you mean by 5% better (expecting to win 55% of the games
would be worth well over 0.01 ppg). 0.01 ppg really is pretty tiny, but if
you had an advantage of that amount, the three intervals above would become
-176 to +196, -58 to +60, and -41 to +42 points respectively.
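Gary's numbers can be reproduced from his normal model. A minimal sketch, assuming only the model stated above (total score ~ Normal(mean p*n, variance 9n)) and the standard 1.96 multiplier for a two-sided 95% interval:

```python
import math

def prediction_interval(n, p=0.0, z=1.96):
    """95% prediction interval for the total score after n money games,
    modelling the total as Normal(mean=p*n, variance=9*n)."""
    sd = math.sqrt(9 * n)                  # standard deviation of the total
    return (p * n - z * sd, p * n + z * sd)

for games in (1000, 100, 50):
    lo, hi = prediction_interval(games)
    print(f"{games} games: {lo:+.0f} to {hi:+.0f}")
# 1000 games: -186 to +186
#  100 games:  -59 to  +59
#   50 games:  -42 to  +42
```

With p = 0.01 the same function gives roughly -176 to +196 for 1000 games, matching the figures quoted above.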
Achim Müller writes:
Ooops! Of course I meant 0.1 ppg. Sorry for that mistake, tough night
last night ;-)
But now I have two models:
1. Stig Eide
> If you play n games, a 96% confidence interval for two
> equally good players is 0 +/- 5*sqrt(n) points.
2. Gary Wong
> A reasonable model for the distribution of points after a long series of
> money games is that the total points after n games are normally
> distributed with mean pn and variance 9n (where p is the advantage you
> have in points per game).
Could you, Gary, explain where the 9 in "9n" comes from?
Gary Wong writes:
Sure. There's no deep theoretical reason for picking that value, but an
observation of several hundred games of mine showed a sample variance
of 8.4. This result and others I've occasionally seen posted here
lead me to estimate that the true variance is somewhere around 9.
(Obviously it will depend on the players, the rules, etc. etc., but
9 is a convenient rule of thumb.)
The variance I estimate agrees fairly closely with Stig's; a variance of
9 is a standard deviation of 3 so I would predict a +/-2 sigma interval
of 0 +/- 6*sqrt(n) points -- essentially identical to his result, at this
level of accuracy.
(Of course the normal distribution is a bad fit for small numbers of games:
because of the cube, backgammon score distributions have very long tails,
with a relatively high probability of a result a long way from the mean.
It's not until you add up a lot of games that the central limit theorem
kicks in and this approximation becomes reasonable. In practice I wouldn't
be too worried about weird cube effects once there were, say, 50 or more
games.)
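Estimating the per-game variance from your own results, as Gary describes, takes only a few lines. A sketch; the scores below are invented purely for illustration (positive = points won, negative = points lost, cube included):

```python
# Estimate the per-game point variance from a record of money-game results.
# These scores are made up for illustration only.
scores = [1, -2, 4, -1, 2, -4, 1, 1, -2, 8, -1, 2]

n = len(scores)
mean = sum(scores) / n
# Unbiased sample variance (divide by n - 1).
variance = sum((s - mean) ** 2 for s in scores) / (n - 1)
print(f"sample variance over {n} games: {variance:.1f}")
```

With a few hundred real games in place of the toy list, this is the calculation behind the "somewhere around 9" estimate.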