Over the past week, we have learned about many examples of cooperative behaviour: pairs of Greater Anis come together in groups to breed; endophytic fungi mediate host resistance to pathogens in plants; and, as several people have pointed out, sloths, moths, and algae can form a cooperative unit. The question I’m going to talk about in this post is how cooperative behaviour can evolve in populations of “selfish agents”. In biology, cooperation is defined as giving up reproductive potential to help another (Nowak 2006). Without some additional mechanism, natural selection alone does not favour cooperation.

In 1984, the political scientist Robert Axelrod ran computer tournaments to find strategies for which cooperating gives the best payoff. The problem, as he stated it, “occurs when the pursuit of self-interest by each leads to a poor outcome for all” (Axelrod, 1984). One representation of this situation is the Prisoner’s Dilemma game, invented in 1950. The game involves two prisoners (agents, players, etc.) who can either rat out the other person (defect) or keep silent about the other’s involvement in a crime (cooperate). Each person is completely ignorant of what the other will do. The payoff matrix for this game looks something like this:

                          Player 2
                    Cooperate   Defect
Player 1 Cooperate      R          S
         Defect         T          P

where R, S, T, and P are the payoffs player 1 receives (player 2’s payoff matrix is just the transpose of this one) and satisfy

T > R > P > S.
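To make the one-shot dilemma concrete, here is a minimal Python sketch. It uses the values from Axelrod’s tournament (T = 5, R = 3, P = 1, S = 0), which satisfy this ordering; the specific numbers are illustrative, and any values with T > R > P > S behave the same way.

```python
# Axelrod's tournament values; any T > R > P > S gives the same structure.
T, R, P, S = 5, 3, 1, 0

# Player 1's payoff, indexed by (my move, their move); "C" = cooperate, "D" = defect.
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

# Whatever the opponent does, defecting pays strictly more than cooperating
# (T > R against a cooperator, P > S against a defector):
for their_move in ("C", "D"):
    assert payoff[("D", their_move)] > payoff[("C", their_move)]

# So two rational players both defect and each gets P, even though
# mutual cooperation would have given each of them R > P.
```

Because defection dominates for both players, mutual defection is the one-shot equilibrium despite being worse for everyone than mutual cooperation.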

The dilemma is that defecting is the better move no matter what the other player does (T > R and P > S), so two rational players both defect and each receives the punishment payoff P; yet P < R, so both do worse than if they had cooperated. Axelrod therefore invited experts in game theory to submit computer programs to compete over repeated iterations of the Prisoner’s Dilemma, to see which strategy accumulated the highest payoff. He found that a strategy based on simple reciprocity, called tit-for-tat, performed best: cooperate on the first move, then do whatever the other player did on the previous move. Nowak & Sigmund (1993) later found that a win-stay, lose-shift strategy outperforms tit-for-tat because it can recover from mistakes.
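The two strategies can be sketched in a few lines of Python. This is an illustrative toy, not Axelrod’s or Nowak & Sigmund’s original code: both players follow the same strategy, and player 1 defects once by mistake. Tit-for-tat then gets locked into alternating retaliation, while win-stay, lose-shift (repeat your last move after a good payoff, R or T; switch after a bad one, P or S) re-establishes mutual cooperation within two rounds.

```python
def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def win_stay_lose_shift(my_hist, their_hist):
    # Cooperate first; repeat the last move after a good payoff (R or T),
    # switch after a bad one (P or S). Payoff was good iff the opponent cooperated.
    if not my_hist:
        return "C"
    if their_hist[-1] == "C":          # I received R or T: win -> stay
        return my_hist[-1]
    return "D" if my_hist[-1] == "C" else "C"   # lose -> shift

def play(strategy, rounds=6, mistake_round=1):
    # Both players use the same strategy; player 1 defects by mistake once.
    h1, h2 = [], []
    for t in range(rounds):
        m1 = strategy(h1, h2)
        m2 = strategy(h2, h1)
        if t == mistake_round:
            m1 = "D"                   # the single erroneous defection
        h1.append(m1)
        h2.append(m2)
    return list(zip(h1, h2))

print(play(tit_for_tat))
# [('C', 'C'), ('D', 'C'), ('C', 'D'), ('D', 'C'), ('C', 'D'), ('D', 'C')]
print(play(win_stay_lose_shift))
# [('C', 'C'), ('D', 'C'), ('D', 'D'), ('C', 'C'), ('C', 'C'), ('C', 'C')]
```

After the mistake, tit-for-tat echoes the defection back and forth indefinitely, whereas win-stay, lose-shift passes through one round of mutual defection and then returns to full cooperation.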

Nowak (2006) identified five mechanisms through which cooperation can evolve in nature. These are:

1. Kin selection: natural selection can favour cooperation when there is genetic relatedness between the donor and the recipient.

2. Direct reciprocity: repeated encounters between the same individuals, as in the iterated Prisoner’s Dilemma; win-stay, lose-shift is the strategy best able to maintain a population of cooperators once cooperation is established.

3. Indirect reciprocity: the idea that establishing a good reputation (by helping others) will be rewarded by others.

4. Network reciprocity: a generalization of “spatial reciprocity”, in which cooperators persist by forming network clusters whose members help one another.

5. Group selection: if there are groups of cooperators and groups of defectors, each of which can split into two (while another group goes extinct to keep the total population constant), the cooperating groups split faster.
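Nowak’s paper condenses each mechanism into a simple benefit-to-cost rule, where b is the benefit to the recipient and c is the cost to the donor. The sketch below evaluates those five rules; b = 3 and c = 1, and the values of r, w, q, k, n, and m are arbitrary illustrative choices, not numbers from the post.

```python
# Benefit to recipient and cost to donor (illustrative values).
b, c = 3.0, 1.0

rules = {
    # Kin selection (Hamilton's rule): relatedness r must exceed c/b.
    "kin selection":        lambda r=0.5: r > c / b,
    # Direct reciprocity: probability w of another round must exceed c/b.
    "direct reciprocity":   lambda w=0.9: w > c / b,
    # Indirect reciprocity: probability q of knowing a reputation must exceed c/b.
    "indirect reciprocity": lambda q=0.8: q > c / b,
    # Network reciprocity: b/c must exceed the average number of neighbours k.
    "network reciprocity":  lambda k=4: b / c > k,
    # Group selection: b/c must exceed 1 + n/m (group size n, number of groups m).
    "group selection":      lambda n=10, m=20: b / c > 1 + n / m,
}

for name, rule in rules.items():
    print(f"{name}: cooperation favoured = {rule()}")
# With these numbers, network reciprocity fails (b/c = 3 is not > k = 4)
# while the other four conditions hold.
```

The common thread is that each mechanism rescues cooperation only when the benefit-to-cost ratio b/c clears a threshold set by that mechanism’s structure (relatedness, repeat encounters, reputation, network degree, or group dynamics).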

I had the pleasure of having a one-on-one discussion with Martin Nowak while I was going to school in Boston. He directs the Program for Evolutionary Dynamics at Harvard University. He has written two books, which might be of interest, called Evolutionary Dynamics and SuperCooperators.