The July issue of Scientific American features a cover story written by Martin A. Nowak called “Why We Help”. This very short article contains a brief review of Nowak’s “five rules” for cooperation, a little bit of connection to experimental work in real organisms, and some hazy conjecture concerning what makes humans cooperate. It seems as though every eight or so years an alarm rings at Scientific American headquarters and some editor is reminded to seek out an article on cooperation. Nowak is a favorite, having produced a number of previous articles (Nowak et al. 1995, Sigmund et al. 2002). Unfortunately, there is very little novel content here: previous articles went deeper, actually exploring and explaining the mechanisms that allow cooperation to evolve, while this one reads more like a cheerleading session for Nowak’s scientific prowess and perspective.
The article starts off by explaining the prisoner’s dilemma, which Nowak has championed throughout his career as the simplest depiction of a social dilemma and has used as the basis for numerous investigations using computer simulations. The article then provides the reader with a repackaging of his “Five rules” paper (Nowak 2006), with a little bit of promotion for his book SuperCooperators thrown in for good measure. In this review he makes sure to highlight but not explain his objections to kin selection theory. The article also briefly describes public goods games and the tragedy of the commons.
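For readers new to the game, the core of the prisoner’s dilemma can be captured in a few lines. The sketch below uses the standard payoff ordering T > R > P > S; the particular numbers are illustrative conventions, not values from Nowak’s article:

```python
# A minimal sketch of the prisoner's dilemma payoff structure.
# Payoff ordering T > R > P > S is standard; these numbers are illustrative.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def payoff(me, other):
    """Return my payoff given two moves: 'C' (cooperate) or 'D' (defect)."""
    if me == 'C' and other == 'C':
        return R
    if me == 'C' and other == 'D':
        return S
    if me == 'D' and other == 'C':
        return T
    return P

# Defection strictly dominates cooperation against either move...
assert payoff('D', 'C') > payoff('C', 'C')  # T > R
assert payoff('D', 'D') > payoff('C', 'D')  # P > S
# ...yet mutual cooperation beats mutual defection, hence the dilemma:
assert payoff('C', 'C') > payoff('D', 'D')  # R > P
```

The dilemma is exactly this tension: each player’s dominant strategy is to defect, but mutual defection leaves both worse off than mutual cooperation, which is why explaining the evolution of cooperation requires extra mechanisms in the first place.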
Perhaps the most interesting section of this article is the one in which Nowak tries to apply his brief introduction to humans. He opens the article with the example of a single (meaning without a spouse or children) worker at the Fukushima nuclear power plant who essentially sacrificed his future health by exposing himself to massive amounts of radiation so that the disaster at the plant could be mitigated. Clearly this is an act of rather strong sacrifice, not uncommon in human populations, that demands an explanation. What is Nowak’s favored mechanism by which cooperation evolves in humans? Here he explains that it is indirect reciprocity, the extensive helping among unrelated humans that is aided by the assigning and assessing of reputations. This is a significant declaration, especially for someone who has been more open to group selection than others. But does it explain his paradigmatic introductory example? Not at all. If that worker at the Fukushima nuclear power plant has just sacrificed his ability to reproduce (and it would probably be a bad idea for him to have children), how can any reputational effect make up for this cost? Why employ this example if you cannot explain it? There are several evolutionary explanations for the behavior exhibited by this self-sacrificing worker:
- He is in some sense abnormal, a mutant whose behavior is likely to be purged from the population (this is the usual explanation for outliers, but is problematic given how persistent self-sacrificing behavior is in human populations);
- His behavior is the result of group selection, wherein those who found themselves in groups with more self-sacrificers in the past produced more offspring than those in groups with low rates of self-sacrifice (possible, although hard to make sense of in this context where the recipient group of this extraordinary altruism was so large and nebulous); or
- His behavior was motivated by cultural teaching and not so much his genetic propensities, and his example will inspire future generations to behave similarly when faced with analogous disasters (note that this is a gene-culture hybrid form of group selection).
Notice that the “reputational effects” required of indirect reciprocity play no real role in any of the explanations provided above.
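To make concrete what those reputational effects are supposed to do, here is a toy sketch of indirect reciprocity via image scoring, in the spirit of Nowak and Sigmund’s models. The exact scores, payoffs, and update rule here are illustrative assumptions, not the published model:

```python
# Toy sketch of indirect reciprocity via image scoring (in the spirit of
# Nowak & Sigmund's models; scores, payoffs, and update rule are illustrative).
def donate(scores, payoffs, donor, recipient, benefit=2, cost=1):
    """A discriminating donor helps only recipients in good standing;
    every act is observed, so helping raises the donor's reputation
    and refusing lowers it."""
    if scores[recipient] >= 0:       # recipient is in good standing
        payoffs[donor] -= cost       # helping is costly to the donor...
        payoffs[recipient] += benefit
        scores[donor] += 1           # ...but observers note the good deed
    else:
        scores[donor] -= 1           # refusing help damages reputation

scores = [0, 0, -1]      # player 2 starts with a bad reputation
payoffs = [0, 0, 0]
donate(scores, payoffs, donor=0, recipient=1)  # 1 is in good standing: helped
donate(scores, payoffs, donor=1, recipient=2)  # 2 is in bad standing: refused
```

The point of the mechanism is that today’s costly helping is repaid later by third parties who have tracked your score. That logic works only when the helper survives, with reproductive prospects intact, to collect on that reputation, which is exactly why it cannot carry the weight of the Fukushima example.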
I think that trying to provide evolutionary explanations for particular behaviors by particular humans is a bit of a fool’s errand, so what about explaining the larger landscape of human society? Nowak takes a stab at that as well. As an example of a social dilemma he offers climate change, a well-worn but valuable and important scenario in which cooperation is required to avert population-wide disaster. Citing experimental economics work by Milinski et al. (2008), Nowak claims that so long as we provide enough “authoritative information” and have our “reputation… on the line”, the Milinski paper suggests that we can avert dangerous climate change. Whoa, that is a very different conclusion from the one I came to! Nowak leaves out the most critical finding of this paper: when the probability of loss due to climate change is low, people are very unlikely to make the sacrifices necessary to prevent catastrophe. And even under the paper’s extremely limiting assumptions of small groups working collectively to prevent climate change, the general outcome is insufficient sacrifice. It is unclear how reputation (which mostly has local effects unless it can be scaled up) will have any relevance to attempts to avert climate change.
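The logic behind that critical finding is easy to see in a stripped-down version of a Milinski-style collective-risk game. The sketch below assumes purely selfish players who pay their fair share of a group target only when the expected loss from missing it exceeds the cost; the group sizes, endowments, and probabilities are illustrative stand-ins, not the parameters of the 2008 experiment:

```python
# Hypothetical sketch of a Milinski-style collective-risk game (parameter
# values are illustrative assumptions, not those of the 2008 experiment).
def group_reaches_target(n_players, endowment, fair_share, loss_prob):
    """Each selfish player pays its fair share of the group target only if
    the expected loss from catastrophe (loss_prob * endowment) exceeds the
    cost of contributing. The target is met only if everyone pays."""
    contributors = sum(
        1 for _ in range(n_players)
        if loss_prob * endowment > fair_share
    )
    return contributors == n_players

# With near-certain catastrophe, contributing is worth it;
# at low loss probability, selfish players free-ride and the target fails.
print(group_reaches_target(6, endowment=40, fair_share=20, loss_prob=0.9))  # True:  0.9*40 = 36 > 20
print(group_reaches_target(6, endowment=40, fair_share=20, loss_prob=0.1))  # False: 0.1*40 =  4 < 20
```

Even this cartoon captures the finding Nowak glosses over: once the perceived probability of loss drops, the expected-value calculation flips and contributions collapse, regardless of how much authoritative information or reputation is in play.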
As I have discussed before, Nowak gets into some danger zones when he extrapolates from his theoretical work. In this article he declares that cooperation is unstable, but optimistically suggests that “the altruistic spirit always seems to rebuild itself”. But how? Isn’t that the point of doing all this research into how cooperation evolves: to find out how we might apply scientific insights to the preservation of all the value that cooperation creates? There is almost nothing in this article about how to maximize reputational effects as a means of preserving or fostering cooperation (unless you find the example of a gas bill that compares your consumption to your neighbor’s compelling). It is hard to imagine anyone new to the field coming away with any sense of why we help after reading this article. I must confess that I am bummed that this is the sort of cover article that our field produces.
What’s missing from this article? Well, to be frank, everything theoretical not involving Nowak, who has a tendency to portray progress in our field as having been advanced solely by his work. This tendency becomes most acute in this article, where he leads the reader to believe that all the significant theoretical discoveries have been made by him, that his computer algorithm uncovered the value of the “tit-for-tat” strategy, and that he “discovered” (rather than categorized) the five mechanisms that lead to cooperation. If I were a reader unfamiliar with the rich literature exploring how cooperation evolves, I really might come to the erroneous conclusion that Nowak is the Darwin of cooperation (a misimpression that will only be made worse by reading SuperCooperators). Nowak might be burnishing his reputation with the public with these sorts of articles, but that reputation is going to go in a very different direction within science if he keeps publishing works like this one.
If you are interested in understanding the basic premises of the prisoner’s dilemma, you may want to check out this narrative/interactive PDF, which is part of the Evolutionary Games Infographic Project.