# Cube Theory

*Vision Laughs at Counting, Vol 2*, © 1980 Danny Kleinman

## Probability of Winning

Backgammon theorists have tried to relate proper cube actions to something they call the *probability of winning* the game. But this concept is shattered by the existence of the doubling cube. Sometimes the cube is turned to double out an opponent, ending the game before its natural completion. At other times cube turns increase the stakes of the game, making some games weigh far more heavily than others. Thus the cube affects both the frequency of winning and the size of games won.

Let us define various meaningful probability concepts for backgammon. First let us define a truly hypothetical variable *r*, the *raw winning probability.* Variable *r* becomes the true probability of winning only in a match where the cube can no longer be turned and gammons do not matter, for example, when both sides have reached match point. Nonetheless, *r* is important because it is easiest to conceive and calculate, and other probabilities may be related to it.

Once we have any probability of winning, we can define equity, or *money expectation*. Let *Q* be the size of the cube (or current stake). If *W* is the winning probability, the losing probability is 1 − *W*, and the equity *E* = *Q*(2*W* − 1).

We can also take gammons and backgammons into consideration. Suppose that *G* is our probability of winning a gammon and *g* is our probability of losing a gammon. Likewise let *B* be our probability of winning a backgammon and *b* be our probability of losing a backgammon. A gammon means 1 extra unit of *Q*, while a backgammon means 2 extra units. Thus

*E* = *Q*(2*W* − 1 + *G* − *g* + 2*B* − 2*b*).

We can define *X*, the “gammon-adjusted winning equivalent,” by adding to *W* half of the net gammons and all of the net backgammons. Thus

*X* = *W* + (*G* − *g*)/2 + (*B* − *b*), and *E* = *Q*(2*X* − 1).
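Both forms of the equity formula are easy to verify numerically. Below is a short sketch (an illustration of the algebra above, with made-up probabilities) confirming that *Q*(2*X* − 1) reproduces the direct formula.

```python
def equity(Q, W, G, g, B, b):
    """Money expectation: E = Q(2W - 1 + G - g + 2B - 2b)."""
    return Q * (2 * W - 1 + G - g + 2 * B - 2 * b)

def gammon_adjusted(W, G, g, B, b):
    """Gammon-adjusted winning equivalent: X = W + (G - g)/2 + (B - b)."""
    return W + (G - g) / 2 + (B - b)

# Hypothetical game: 55% wins, 20% gammon wins, 10% gammon losses,
# 3% backgammon wins, 1% backgammon losses, cube sitting on 2.
Q, W, G, g, B, b = 2, 0.55, 0.20, 0.10, 0.03, 0.01
X = gammon_adjusted(W, G, g, B, b)

# The two expressions for E agree.
assert abs(equity(Q, W, G, g, B, b) - Q * (2 * X - 1)) < 1e-12
print(round(equity(Q, W, G, g, B, b), 3))  # → 0.48
```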

We will consider *r* already to have been gammon-adjusted in this way. In using *r* to help decide when to pass or take, therefore, we will automatically account for gammon threats. When we consider cube turns, the stakes drop out of the equations, so we may, without loss of generality, let *Q* = 1. Now we can state both equity and winning probability in terms of each other.

*E* = 2*W* − 1 and *W* = (*E* + 1)/2.

*W* may not be an actual winning probability, of course, but rather an *equity-equivalent* winning probability which translates all games into the current stakes in effect.
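A minimal sketch of the conversion, assuming *Q* = 1 as above:

```python
def equity_from_w(W):
    """E = 2W - 1, with stakes normalized to Q = 1."""
    return 2 * W - 1

def w_from_equity(E):
    """W = (E + 1)/2, the equity-equivalent winning probability."""
    return (E + 1) / 2

# The two maps are inverses of each other.
assert abs(w_from_equity(equity_from_w(0.62)) - 0.62) < 1e-12
print(equity_from_w(0.75))  # a 75% "winner" holds half a unit of equity
```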

In addition to *r*, every backgammon position has associated with it three other winning probabilities corresponding to the three possible locations of the cube. Let *u* be the probability when the cube is *unavailable* because it is owned by the opponent. Let *c* be the probability when the cube is in the *center*. And let *p* be the probability when the cube is *possessed*.

In general, *u* is less than *r*, and *p* is greater than *r*. Probability *c* may be either greater or less than *r*, rarely equal. This difference between *c* and *r* occurs because the cube in the center favors the player with the stronger position, who is more apt to be able to use the cube.

We can establish that we can always take a double if *r* is at least 1⁄4. Our equity by passing, of course, is −1. By taking, we obtain equity of 2(2*p* − 1), or 4*p* − 2. If *r* is at least 1⁄4, then so is *p*, and therefore 4*p* − 2 is no lower than −1. This does not prove that we require *r* to be at least 1⁄4 in order to take. For, since *p* is greater than or equal to *r*, *p* can at least equal 1⁄4 when *r* is still less than 1⁄4.
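The pass-or-take comparison above can be sketched in a few lines (the values of *p* are purely illustrative):

```python
def pass_equity():
    """Passing a double always costs exactly one unit."""
    return -1.0

def take_equity(p):
    """Taking doubles the stakes: E = 2(2p - 1) = 4p - 2."""
    return 4 * p - 2

# Taking breaks even with passing exactly at p = 1/4.
for p in (0.20, 0.25, 0.30):
    print(p, round(take_equity(p), 2), take_equity(p) >= pass_equity())
```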

## The Continuous Model

To establish what we must require *r* to be in order to take a double, we must set an upper limit on the equity-equivalent probability conferred by possession of the cube. We will do this by making certain contrary-to-fact assumptions whose effect will always be in the direction of exaggerating the worth of owning the cube. These are the assumptions of the Continuous Model.

Let us assume that as either side approaches victory in a backgammon game, the probability *r* passes through all values on the way to 1 (or 0). Let us also grant the owner of the cube the power to turn the cube at any precise moment he wants; in particular, when *r* rises exactly to the point where his opponent’s equity will be −1 equally by passing or taking. As Spencer and Keeler, proponents of the Continuous Model, show, the probability of *r* rising to *b* before *r* falls to *a*, assuming *r* lies in the interval (*a*,*b*), is

(*r* − *a*)/(*b* − *a*).

Let *x* be that value of *r* for which a pass and a take equally yield equity of −1. We know that the winning probability owning the cube, *p*, is exactly 1⁄4 when this happens. But 1⁄4 must also be the probability of *r* rising to 1 − *x* before falling to 0 (for we are assuming that our opponent adopts our optimum pass-or-take policy also). This is simply *x*/(1 − *x*). Setting *x*/(1 − *x*) = 1⁄4 gives *x* = 1⁄5.

In fact, we can make an even more optimistic assumption: that at the time we are ready to redouble our opponent, the cube will have become worthless to him, so that our equation changes to

1⁄4 = *x*/(1 − 1⁄4),

giving *x* = 3⁄16.
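Both take points follow from the ruin formula of the Continuous Model. A short sketch checking them numerically (the function names are illustrative, not from the original):

```python
def ruin(r, a, b):
    """Probability that r reaches b before a, given a < r < b
    (the Spencer-Keeler Continuous Model formula (r - a)/(b - a))."""
    return (r - a) / (b - a)

# Live cube: the taker at r = x wins by driving r up to 1 - x
# (the doubling-out point) before hitting 0.
# Break-even: ruin(x, 0, 1 - x) = 1/4  =>  x = 1/5.
x = 0.20
assert abs(ruin(x, 0, 1 - x) - 0.25) < 1e-12

# Cube worthless to the opponent: he passes only at r = 3/4, so the
# condition becomes ruin(x, 0, 3/4) = 1/4  =>  x = 3/16.
x = 3 / 16
assert abs(ruin(x, 0, 0.75) - 0.25) < 1e-12
print("both take points check out")
```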

There is actually a simple backgammon position where this is true, where raw winning chances of only 3⁄16 allow us to take the cube: the bear-off of one man on the 6-point versus one man on the 6-point!

*White doubles. We have an optional take even though our raw winning chances are only 3⁄16.*

Even here we lose nothing by passing. It is reasonable to conclude that we can always pass a double if *r* is no more than 1⁄5.
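The 6-point versus 6-point figure can be verified by enumerating all 36 rolls. A sketch (my own enumeration, not from the text): a lone checker on the 6-point bears off unless the roll is worth fewer than six pips, so each side is a 3⁄4 favorite on roll, and 1⁄4 × 3⁄4 = 3⁄16 is the taker's chance of surviving the first exchange and winning.

```python
from itertools import product

def bears_off(d1, d2):
    """A lone checker on the 6-point comes off iff the roll is worth
    at least 6 pips (a doublet counts as four moves of that die)."""
    pips = 4 * d1 if d1 == d2 else d1 + d2
    return pips >= 6

misses = sum(not bears_off(a, b) for a, b in product(range(1, 7), repeat=2))
print(misses, "misses in 36")  # 9 misses: the roller is a 3/4 favorite

# The taker wins the first exchange when the doubler misses (1/4)
# and he then bears off himself (3/4):
print(9 / 36 * 27 / 36)  # 0.1875 = 3/16
```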

The dividing line between passing and taking thus varies in the range between *r* = .20 and *r* = .25. It is .20 when the cube is completely *alive* and .25 when the cube is totally *dead*. Usually it is somewhere between, corresponding to various degrees of “crippling” of the cube.

The Continuous Model, in effect, assigns a worth to owning the cube of *r*/4 extra winning chances. Thus

*p* = 5*r*/4, *c* = (5*r* − 1)/3, and *u* = (5*r* − 1)/4,

all expressed in terms of the raw probability *r*.
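The three formulas can be tabulated directly. A brief sketch (illustrative values of *r*); note that *p* is exactly 1⁄4 at *r* = 1⁄5, matching the live-cube take point:

```python
def p_own(r):
    """Cube possessed: win by reaching r = 0.8 before 0."""
    return 5 * r / 4

def c_center(r):
    """Cube centered: reach 0.8 before being doubled out at 0.2."""
    return (5 * r - 1) / 3

def u_unavail(r):
    """Cube owned by the opponent: reach 1.0 before 0.2."""
    return (5 * r - 1) / 4

# Owning the cube at r = 1/5 is worth exactly p = 1/4.
assert abs(p_own(0.20) - 0.25) < 1e-12

for r in (0.3, 0.5, 0.7):
    print(r, round(p_own(r), 3), round(c_center(r), 3), round(u_unavail(r), 3))
```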

## The Discontinuous Model

In real backgammon, our winning chances do not progress gradually and smoothly to the point where we can double optimally, but rather in leaps and bounds. The Continuous Model prescribes doubling when *r* = 0.80 exactly. But in practice, if *r* is 0.70 or 0.75 this turn, by the next turn *r* may have jumped abruptly to 0.85 or 0.90. We cannot expect ever to hit *r* = 0.80 on the nose. Let us modify the Continuous Model to reflect this.

### Late Double

We must weigh the risks in doubling too early against the risks in doubling too late. The risks in doubling too late are easy to compute. Suppose we wait until *r* rises to 0.80 + *e* before doubling. Our opponent passes, and we win an equity of exactly 1. But if we had doubled while our opponent would still take, our equity would be 4*u* − 2. Now

*u* = (5*r* − 1)/4 = (4 + 5*e* − 1)/4,

so 4*u* − 2 = (4 + 5*e* − 1) − 2, or 1 + 5*e*. The equity we lose by not doubling is 5*e*.

### Early Redouble

The risks in doubling too soon depend on the prior location of the cube, whether we are making an initial double or redoubling. An initial double relinquishes only our option to defer the doubling decision. But a redouble gives our opponent an option of doubling he wouldn’t have if we kept the cube. An early redouble thus figures to be far more hazardous than an early double.

Let us suppose we redouble while *r* still has not yet reached 0.80, so that for some small positive *e*, *r* = 0.80 − *e*. By keeping the cube we would have retained equity of 2*p* − 1. Now

*p* = 5*r*/4 = (4 − 5*e*)/4,

so 2*p* − 1 = 1 − 2.5*e*. By redoubling now, we obtain equity of 4*u* − 2. Since *u* = (5*r* − 1)/4 = (4 − 5*e* − 1)/4, 4*u* − 2 = (4 − 5*e* − 1) − 2, or 1 − 5*e*. The cost of the early redouble is thus (1 − 2.5*e*) − (1 − 5*e*) = 2.5*e*.

### Early Initial Double

Now let us see the difference when we offer an early initial double. The equity we retain by not doubling is 2*c* − 1. Since

*c* = (5*r* − 1)/3 = (4 − 5*e* − 1)/3,

we retain 2*c* − 1 = 1 − 10*e*/3. Doubling now yields 4*u* − 2 = 1 − 5*e*, as before, so the cost of the early initial double is (1 − 10*e*/3) − (1 − 5*e*) = 5*e*/3.
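The three error costs derived above can be compared side by side. A sketch under the Continuous Model formulas, with a hypothetical miss of *e* = 0.02:

```python
def cost_late_double(e):
    """Waiting past 0.80: we win 1 instead of 4u - 2 = 1 + 5e."""
    return 5 * e

def cost_early_redouble(e):
    """Redoubling at 0.80 - e: (1 - 2.5e) - (1 - 5e)."""
    return 2.5 * e

def cost_early_initial_double(e):
    """Doubling at 0.80 - e: (1 - 10e/3) - (1 - 5e)."""
    return 5 * e / 3

e = 0.02
print(cost_late_double(e), cost_early_redouble(e), cost_early_initial_double(e))

# Late is twice as costly as an early redouble,
# and three times as costly as an early initial double.
assert abs(cost_late_double(e) - 2 * cost_early_redouble(e)) < 1e-12
assert abs(cost_late_double(e) - 3 * cost_early_initial_double(e)) < 1e-12
```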

### When to Double

Thus it is twice as costly to redouble a little too late as a little too soon, and three times as costly to offer initial doubles too late as too soon. Furthermore, since the optimum value of *r* for turning the cube is less than 0.80 (our opponent cannot use the cube with the perfect efficiency assumed by the Continuous Model), let us substitute, say, the compromise value of *r* = 0.78 as our optimum doubling point. It is clear that we should double or redouble somewhat *before* *r* rises to 0.78, however. For, if we don't, *r* will have to rise somewhat above 0.78, costing us equity when we then double our opponent out.

How close to 0.78 must *r* approach to justify a cube turn? Considerably closer for a redouble than for a double, of course. But we must figure using yet another variable: the volatility of the position. This is how much the position will tend to change from one turn to the next.

At the start of a game, positions do not tend to be very volatile. No single shake of the dice is yet decisive. In complicated midgame positions one shake can turn the game around substantially: Primes can crumble, or retarded back men can escape. In holding games and back games, the leaving of shots can produce volatile positions. Hitting the shot may virtually guarantee victory, while missing may almost preclude getting any subsequent chances to win. If a game converts into a pure race, then the early stages of that race will have little volatility. But as the race enters the later stages of the bear-off, the positions once again become highly volatile. A single miss or a single large doublet can make a vast difference in winning chances.

In relatively stable positions, you need not hurry to turn the cube. Let *r* approach the optimum gradually! For example, suppose that after your opponent has borne about half his men off, you hit a shot and succeed in closing your board with the one enemy man still on the bar. Whether *r* will just about reach 0.78 may depend on how smoothly you can bear in and start to bear off. While you are still bringing your men home, your winning chances will change only slightly. You may as well delay your decision to redouble.

Another example of a position where you can delay deciding to redouble is a different kind of close-out. This time you have accepted the cube early in the game, then turn the game around with a sudden joker. Now you have a gammon threat with an opposing man closed out on the bar. You are too good to double. But if your position deteriorates, perhaps you should double your opponent out. You should usually wait and see what happens. While your opponent still has a man on the bar, your position will usually deteriorate only slowly. Until an immediate danger looms ahead of you, keep the cube.

## The Worst-Case Assumption and the 70 Percent Rule

Let us assume, pessimistically and unrealistically, that our opponent can use the cube with perfect efficiency and that we can use the cube with perfect inefficiency. This means that we can only double our opponent out when we have a certain win anyway, without the cube turn; while, in contrast, our opponent can double us out whenever *r* drops to 0.25, which will occur with probability

(1 − *r*)/(3⁄4), or 4(1 − *r*)/3.

Since we could still have won 1⁄4 of the games in which *r* fell to 0.25, the diminution of our winning probability if we redouble is 1⁄4 of the probability of our getting doubled out, or (1 − *r*)/3. Thus

*u* = *r* − (1 − *r*)/3 = (4*r* − 1)/3.

By not redoubling, we retain an equity of 2*r* − 1. But by redoubling now, we attain an equity of 4*u* − 2 = (16*r* − 4 − 6)/3 = (16*r* − 10)/3. Equating the two tells us how high *r* must rise before we can redouble. From 2*r* − 1 = (16*r* − 10)/3, 6*r* − 3 = 16*r* − 10, or 7 = 10*r*, giving *r* = 0.70.
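The break-even algebra can be confirmed numerically. The sketch below bisects for the point where redoubling and waiting yield equal equity under the worst-case assumption:

```python
def hold_equity(r):
    """Worst case: owning the cube adds nothing, so E = 2r - 1."""
    return 2 * r - 1

def redouble_equity(r):
    """E = 4u - 2 with u = (4r - 1)/3, i.e. (16r - 10)/3."""
    return (16 * r - 10) / 3

# Bisect for the break-even point (the closed form above gives r = 0.70).
lo, hi = 0.5, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if redouble_equity(mid) < hold_equity(mid):
        lo = mid
    else:
        hi = mid
print(round((lo + hi) / 2, 6))  # 0.7
```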

Thus 70% seems to be a rock-bottom minimum percentage for redoubling prior to the final decisive shake of the game. You should not redouble with poorer than 70% chances unless you know the cube to be crippled for your opponent. Only initial doubles should be offered with poorer than 70% chances, and even then, perhaps only a few percentage points poorer.

## Limitations of the Models

In the real world, we can use mathematical models to tell us how favorable our chances should be to pass or take. But the proper time for doubling and redoubling is dictated primarily by psychological considerations. Of course, on last-shake and virtual last-shake decisions, we can use simpler rules: We should turn the cube whenever we are the favorite! Likewise we may make mathematically-based cube turns in easily calculable end positions. Even in these cases, however, cube actions in tournament matches offer frequent exceptions, depending substantially on the state of the match and the relative skill levels of the opponents.

In all other doubling or redoubling situations, our opponent’s degree of liberalism in accepting the cube should guide us. An overliberal opponent should induce us to delay doubling until the last moment when he will still take. An overconservative opponent should be doubled out at the earliest opportunity. And if we are in the box in a chouette against a mixture of opponents, we should turn the cube as soon as we feel confident of a mixed response from the crew, some passing and some taking.

Thus the mathematics of turning the cube can only guide us properly against opponents who always react properly when doubled, never taking overliberally or passing overconservatively. We can use it, essentially, only in head-to-head money play against strong opponents, or in the early stages of relatively even tournament matches.