David desJardins writes:
> THEOREM. Suppose that player A has rating ra, and player B has rating
> rb. Set a "target rating" rt for player A. Then there is a finite
> number N, depending on ra, rb, and rt, such that, if player A wins N
> consecutive matches from player B, player A's rating will exceed rt.
It occurs to me that there's another interesting theorem that can be
proved:
THEOREM 2. Suppose that Player A and Player B play an infinite sequence
of matches, with the winner of each match determined at random (A wins
with probability P, where 0 < P < 1). Then, for any "target rating" rt,
A's rating will at some point exceed rt, with probability one.
What this implies is that arbitrarily high ratings will eventually be
achieved, with probability one, even without any collusion of any sort.
Of course, this will take a *really* long time, for high target ratings.
This result still only relies on basic properties of the FIBS rating
system. We need the following:
1. The winner of each match gains a positive number of rating points
which depends only on the ratings difference between the players.
The loser loses the same number of rating points.
2. The rating point gain for the winner is a strictly decreasing
function of the rating difference between the winner and the loser,
and goes to zero as the rating difference goes to infinity.
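Properties 1 and 2 can be made concrete with a small Python sketch. The gain function below is purely illustrative (an Elo-style logistic curve, not the actual FIBS formula; the constants K=16 and 400 are arbitrary choices for this sketch), but it satisfies both properties: the winner's gain is positive, strictly decreasing in the winner-minus-loser rating difference, and tends to zero as that difference goes to infinity.

```python
K = 16.0  # arbitrary scale constant for this sketch (not a FIBS constant)

def gain(d):
    """Winner's rating-point gain when the winner leads the loser by d
    points.  Positive, strictly decreasing in d, and -> 0 as d -> infinity,
    as required by properties 1 and 2."""
    return K / (1.0 + 10.0 ** (d / 400.0))

def play_match(ra, rb, a_wins):
    """Property 1: the winner gains gain(diff) and the loser loses the
    same amount, so ra + rb is conserved."""
    if a_wins:
        g = gain(ra - rb)
        return ra + g, rb - g
    g = gain(rb - ra)
    return ra - g, rb + g
```

Note that because the winner's gain equals the loser's loss, the sum of the two ratings never changes over a sequence of matches between A and B; this fact is used implicitly in the proof below.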
PROOF of THEOREM 2. (I'll only give a sketch; I can fill in more
details if anyone cares. All of this follows from properties 1 and 2.)
There is a pair of "equilibrium ratings" (ra,rb), such that if Player A
has rating ra and Player B has rating rb, then the expected change in
A's rating from the next match is zero; i.e.,
P * (A's gain for winning) - (1-P) * (B's gain for winning) = 0.
If A's rating exceeds ra, then the expected change in A's rating from
the next match is negative. If A's rating is below ra, then the
expected change in A's rating from the next match is positive.
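The equilibrium rating difference can be found numerically. Here is a minimal Python sketch that bisects for the difference d at which the expected change P * gain(d) - (1-P) * gain(-d) is zero; it reuses the same illustrative (non-FIBS) gain function as above, so the particular numbers it produces are artifacts of that choice.

```python
def gain(d, K=16.0):
    # Same illustrative (non-FIBS) gain function as in the earlier sketch.
    return K / (1.0 + 10.0 ** (d / 400.0))

def equilibrium_diff(P, lo=-4000.0, hi=4000.0, tol=1e-9):
    """Bisect for the rating difference d at which A's expected rating
    change per match, P*gain(d) - (1-P)*gain(-d), is zero.  The expected
    change is strictly decreasing in d, so a single root exists."""
    def expected_change(d):
        return P * gain(d) - (1.0 - P) * gain(-d)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if expected_change(mid) > 0:
            lo = mid  # expected change still positive: equilibrium is higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For this particular gain function the root also has a closed form, which makes a handy sanity check: d = 400 * log10(P / (1-P)).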
It follows that, infinitely often, A's rating will be at or above ra.
[This follows from standard results about random walks; it's
intuitively obvious, but also the hardest part of the proof.]
Consider the target rating rt. Since rating points are conserved
(property 1), the sum of the two ratings never changes; so when A's
rating is rt, B's rating is ra+rb-rt, and the rating difference is
2*rt-ra-rb. As in the proof of Theorem 1, let rg be the rating point
gain for A for winning when the rating difference is 2*rt-ra-rb. Then,
for any smaller rating difference, A's gain for winning exceeds rg.
Let N = floor((rt-ra)/rg) + 1. Then if A wins N consecutive matches,
starting with a rating at or above ra, A's rating after the N wins
will exceed rt.
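This step can be checked directly. The sketch below computes N from the formula above and then applies N consecutive wins, again using the illustrative (non-FIBS) gain function from the earlier sketches; the starting ratings are just example values.

```python
import math

def gain(d, K=16.0):
    # Same illustrative (non-FIBS) gain function as in the earlier sketches.
    return K / (1.0 + 10.0 ** (d / 400.0))

def wins_needed(ra, rb, rt):
    """N = floor((rt - ra)/rg) + 1, where rg = gain(2*rt - ra - rb) is
    A's smallest possible per-win gain while A's rating is below rt."""
    rg = gain(2.0 * rt - ra - rb)
    return math.floor((rt - ra) / rg) + 1

def rating_after_wins(ra, rb, n):
    """A's rating after n consecutive wins over B."""
    for _ in range(n):
        g = gain(ra - rb)
        ra, rb = ra + g, rb - g
    return ra
```

The key point, as in the text: every win before A crosses rt occurs at a rating difference smaller than 2*rt-ra-rb, so each such win gains more than rg, and N wins of at least rg points each are enough to cover the gap rt-ra.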
The probability of such a sequence of wins is P^N, which is greater than
zero. Given an infinite sequence of occasions (times when A's rating is
at least ra) when such a sequence of wins might occur, with a positive
probability of occurrence, with probability one it will eventually
occur. QED.
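The whole argument can also be watched in action with a small Monte Carlo sketch: play random matches (A wins each with probability P) and record when A's rating first exceeds a target. As before this uses the illustrative (non-FIBS) gain function, and the parameter values (P=0.6, starting ratings of 1500, the match cap) are arbitrary choices for the demonstration.

```python
import random

def gain(d, K=16.0):
    # Same illustrative (non-FIBS) gain function as in the earlier sketches.
    return K / (1.0 + 10.0 ** (d / 400.0))

def first_time_above(rt, P=0.6, ra=1500.0, rb=1500.0,
                     max_matches=10**6, seed=1):
    """Play random matches (A wins each with probability P) until A's
    rating exceeds rt.  Returns the number of matches played, or None
    if the cap is reached first."""
    rng = random.Random(seed)
    for n in range(1, max_matches + 1):
        if rng.random() < P:
            g = gain(ra - rb)
            ra, rb = ra + g, rb - g
        else:
            g = gain(rb - ra)
            ra, rb = ra - g, rb + g
        if ra > rt:
            return n
    return None
```

For modest targets the crossing happens quickly; for targets far above the equilibrium rating the expected waiting time grows very rapidly, which is the "*really* long time" mentioned above.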
David desJardins