No Pure Strategy Is Stable

I learned something new about the prisoner's dilemma today. Here's how it happened. I was reading an essay on USS Clueless, and ran into the following sentence.

It turned out that the best anyone could find, and the best anyone has ever found, was known as "Tit-for-tat".

Now, I know about “tit for tat” (see Evolutionarily Stable Strategies), and am quite fond of it, but I thought that description was a bit strong. In fact, once or twice I'd heard that some strategy or other had been found to be superior in some context, but when I'd tracked the original papers down, I'd found them to be technically correct but not interesting. So, I knew the description was too strong, but I no longer had the references to prove it.

So, just for fun, I tried some searches on Google. That certainly led to some unusual parts of the web! I didn't find what I was looking for, exactly, but I did find one nice collection of related material, including a summary of and reference to a very interesting paper. The paper's title gets right to the point.

No pure strategy is evolutionarily stable in the repeated Prisoner's Dilemma game

(A pure strategy is one that doesn't include any randomness—see The Shape of Strategy Space for slightly more detail.)

I was curious, so I went and found the paper, and read it. It's short, only two pages, and is both correct and interesting. Here's how I'd summarize it.

First, any pure strategy is neutral to invasion by other strategies with identical behavior. The behavior can't be identical under all conditions, of course, or the other strategies wouldn't be different, but all that matters here is that the behavior be identical under the conditions that actually occur. If, for example, the pure strategy is nice, i.e., never the first to defect, then its behavior in response to a defection is never explored, and any other nice strategy can invade it—just as the strategy that always cooperates can invade “tit for tat”.
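This first step is easy to check directly. Here's a minimal sketch (my own illustration, not from the paper): in an iterated prisoner's dilemma, "tit for tat" and "always cooperate" behave identically against each other, so their payoffs tie exactly, which is what makes the invasion neutral.

```python
# Standard prisoner's dilemma payoffs for (my move, their move);
# 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return history[-1] if history else 'C'

def all_cooperate(history):
    """Always cooperate, no matter what the opponent does."""
    return 'C'

def play(strat_a, strat_b, rounds=10):
    """Total payoffs for an iterated game between two strategies."""
    hist_a, hist_b = [], []          # opponent's past moves, from each side's view
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Tit for tat never meets a defection from all-cooperate, so the two
# strategies are indistinguishable in practice: both pairings produce
# ten rounds of mutual cooperation and identical scores.
print(play(tit_for_tat, all_cooperate))   # (30, 30)
print(play(tit_for_tat, tit_for_tat))     # (30, 30)
```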

Then, after the pure strategy has been invaded by other strategies, suppose some mutants with different behavior appear. The original strategies all do equally well when they play each other; the only thing that distinguishes them is how well they do when they play the mutants … and, as it happens, one can always find mutants that will make the original pure strategy lose.
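The second step can be illustrated the same way. Here's a self-contained sketch (again my own, with a hypothetical mutant of my own construction, not one from the paper): a mutant that probes with a defection and then punishes retaliators makes the invader ("always cooperate") outscore the original pure strategy ("tit for tat").

```python
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strat_a, strat_b, rounds=10):
    """Total payoffs for an iterated prisoner's dilemma."""
    hist_a, hist_b = [], []          # opponent's past moves, from each side's view
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

def tit_for_tat(history):
    return history[-1] if history else 'C'

def all_cooperate(history):
    return 'C'

def mutant(history):
    """Hypothetical mutant: defect once to probe, then cooperate forever
    with opponents who never retaliate, and defect forever against
    opponents who do."""
    if not history:
        return 'D'
    return 'D' if 'D' in history else 'C'

# Tit for tat retaliates against the probe and gets locked into mutual
# defection; all-cooperate absorbs the probe and settles into mutual
# cooperation, so it comes out ahead of the original pure strategy.
print(play(tit_for_tat, mutant)[0])    # 12
print(play(all_cooperate, mutant)[0])  # 27
```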

So there you have it—any evolutionarily stable strategy must include randomness. If you're playing against an ESS, who knows what it will do if you defect? It will do something, and you won't like it. (That's a slightly different kind of clarity than the one I discussed at the end of Foolish Consistency.)

Although the result is interesting, in practice I think the small effects due to playing against the mutants might well be outweighed by other things, like the small costs associated with using a more complex strategy.

Also, evolutionary stability isn't the be-all and end-all. That's the point of evolution, after all, that things change; why should strategies be exempt? That, in turn, reminds me of a sentence from The Tragedy of the Commons that I already quoted in another essay.

the morality of an act is a function of the state of the system at the time it is performed.

I'm not sure what strategies have to do with morality, but I like the comparison.

Finally, I should point out that the authors of the paper were responding to Axelrod, who thought he had proved that “tit for tat” was evolutionarily stable. Actually, he had proved it, but using a different definition of stability. I'm not concerned with the details; for me it's sufficient that the sequence of events described above is plausible.

Speaking of definitions, here's a short quote from The Evolution of Cooperation that I present in honor of A Beautiful Mind.

Those familiar with the concepts of game theory will recognize this definition of a collectively stable strategy as a strategy that is in Nash equilibrium with itself.
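The "Nash equilibrium with itself" condition is easy to state concretely. As a sketch of the idea in the simplest possible setting (the one-shot game, my own illustration rather than Axelrod's iterated setting): a move is in Nash equilibrium with itself if no alternative move scores strictly better against it than it scores against itself.

```python
# Standard one-shot prisoner's dilemma payoffs for (my move, their move).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def nash_with_itself(s, moves=('C', 'D')):
    """True if no alternative move does strictly better against s
    than s does against itself."""
    return all(PAYOFF[(m, s)] <= PAYOFF[(s, s)] for m in moves)

# Defection is in Nash equilibrium with itself (no move beats 1 against
# a defector); cooperation is not (defecting against a cooperator pays
# 5, better than the 3 from mutual cooperation).
print(nash_with_itself('D'))   # True
print(nash_with_itself('C'))   # False
```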


@ May (2002)