HOW CULTURE DROVE HUMAN EVOLUTION
A Conversation with Joseph Henrich
Part of my program of research is to convince people that they should stop distinguishing cultural and biological evolution as separate in that way. We want to think of it all as biological evolution.
JOSEPH HENRICH is an anthropologist and Professor of Psychology and Economics. He is the Canada Research Chair in Culture, Cognition and Coevolution at the University of British Columbia.
[JOSEPH HENRICH:] The main questions I've been asking myself over the last couple years are broadly about how culture drove human evolution. Think back to when humans first got the capacity for cumulative cultural evolution—and by this I mean the ability for ideas to accumulate over generations, to get an increasingly complex tool starting from something simple. One generation adds a few things to it, the next generation adds a few more things, and the next generation, until it's so complex that no one in the first generation could have invented it.
This was a really important juncture in human evolution, and we've begun to pursue an idea called the cultural brain hypothesis: the idea that the real driver of the expansion of human brains was this growing cumulative body of cultural information, so that what our brains increasingly got good at was acquiring, storing, processing, and retransmitting this non-genetic body of information.
The two systems begin interacting over time, and the most important selection pressures over the course of human evolution are the things that culture creates—like tools. Compared to chimpanzees, we have high levels of manual dexterity. We're good at throwing objects. We can thread a needle. There are aspects of our brain that seem to be consistent with that as being an innate ability, but tools and artifacts (the kinds of things that one finds useful to throw or finds useful to manipulate) are themselves products of cultural evolution.
Another example here is fire and cooking. Richard Wrangham, for example, has argued that fire and cooking have been important selection pressures, but what often gets overlooked is that both are culturally transmitted. We're actually terrible at making fires; we have no innate fire-making ability. But once fire-making and cooking could be culturally transmitted, they created a whole new selection pressure that made our stomachs smaller, our teeth smaller, our gapes (the openings of our mouths) smaller, and shortened our intestines. It had a whole bunch of downstream effects.
Another area that we've worked on is social status. Early work on human status just took humans to have a kind of status that stems from non-human status. Chimps, other primates, have dominant status. The assumption for a long time was that status in humans was just a kind of human version of this dominant status, but if you apply this gene-culture co-evolutionary thinking, the idea that culture is one of the major selection pressures in human evolution, you come up with this idea that there might be a second kind of status. We call this status prestige.
This is the kind of status you get from being particularly knowledgeable or skilled in an area, and the reason it's a kind of status is that once animals, humans in this case, can learn from each other, other individuals become repositories of valuable resources.
You have information resources that can be tapped, and so you want to identify the members of your group who are most likely to possess a lot of this resource, meaning the knowledge or information that could be useful to you in the future. This causes you to focus on those individuals, differentially attend to them, preferentially listen to them, and give them deference in exchange for the knowledge you get back, for copying opportunities in the future.
From this we've argued that humans have two separate kinds of status, dominance and prestige, and these have quite different ethologies. Dominance ethology is about physical posture and size (the large expanded chest you'd see in apes). Subordinates in dominance hierarchies are afraid: they back away, they look away. Prestige hierarchies are quite the opposite.
You're attracted to prestigious individuals. You want to be near them. You want to look at them, watch them, listen to them, and interact with them. We've done a bunch of experimental work here at UBC and shown that that pattern is consistent, and it leads to more imitation. There may be even specific hormonal profiles with the two kinds of status.
I've also been trying to think broadly, and one of the big questions is exactly when this process of cumulative cultural evolution got started. Lately I've been pursuing the idea that it may have started early: at the origin of the genus Homo, about 1.8 million years ago, when Homo habilis or Homo erectus first begins to emerge in Africa.
Typically, people thinking about human evolution have approached this as a two-part puzzle: a long period of genetic evolution until either 10,000 or 40,000 years ago, depending on whom you're reading, after which culture began to matter, with little or no consideration given to a long period of interaction between genes and culture.
Of course, the evidence available in the Paleolithic record is pretty sparse, so another possibility is that it emerged about 800,000 years ago. One theoretical reason to think that might be an important time is that there are models showing that culture, our ability to learn from others, is an adaptation to fluctuating environments. If you look at the paleoclimatic record, you can see that the environment starts to fluctuate a lot beginning about 900,000 years ago and continuing until about five or six hundred thousand years ago.
This would have created a selection pressure for lots of cultural learning, for focusing on other members of your group and taking advantage of that cumulative body of non-genetic knowledge.
Another signature of cultural learning is regional differentiation in material culture, and you see that by about 400,000 years ago. So you could have a late emergence at 400,000 years ago, a middle guess of 800,000 years ago based on the climate record, and an early guess at the origin of the genus, 1.8 million years ago.
Along these same lines, I've been trying to figure out what the ancestral ape would have looked like. We know that humans share a common ancestor with chimpanzees and bonobos from about five or six million years ago, and the question is, what kind of ape was that?
One possibility, and the typical assumption, is that the ape was more like a chimpanzee or a bonobo. But there's another possibility: that it was a kind of ape we don't have in the modern world, a communal-breeding ape that lived in family units rather than the fission-fusion groups you see in chimpanzees, and that chimpanzees and bonobos took a separate turn while the lineage that eventually led to humans spun off a whole bunch of different kinds of apes. In the Pliocene, we see lots of different kinds of apes in the form of different species of Australopithecus.
I'm just beginning to get into that, and I haven't gotten very far, but I do have a strong sense that we now have evidence to suggest that humans were communal breeders, that we lived in family groups maybe somewhat similar to the way gorillas do, and that this is a much better environment for the evolution of capacities for culture than the typical chimpanzee model, because for cultural learning to really take off, you need more than one model to learn from.
You want a number of individuals in your social environment to be trying out different techniques—say different techniques for getting nuts or for finding food or for tracking animals. Then you need to pay attention to them so you can take advantage of the variation between them. If there's one member of your group who's doing it a little bit better, you preferentially learn from them, and then the next generation gets the best technique from the previous generation.
Other things I've been thinking about along these lines are just trying to think through all the different adaptations that would have resulted from this gene culture interaction. One thing that's been noted by a number of people is that humans are strangely good at long distance running. We seem to have long distance running adaptations.
Our feet have a particular anatomy, we have sweat glands, and we can run really far. Hunter-gatherers can chase down game by simply running an antelope until it collapses. We run marathons. We seem generally attracted to running, and the question is, how did we become such long-distance runners?
We don't see this in other kinds of animals. If it were an obvious adaptation, we'd expect to see it recurring through nature, but only humans have it. The secret is that humans who don't know how to track animals can't run them down: you need a large body of tracking knowledge that allows you to interpret spoor, identify individual animals, and follow animals over long distances when you can't see them. Without that body of knowledge, we're not very good at running game down.
There's an interaction between genes and culture. First you have to get the culturally transmitted knowledge about animal behavior, tracking, and spoor, including the ability to identify individuals, which is something you need to practice, and only then can you begin to take advantage of long-distance running techniques and run animals down.
That's a potential source for figuring out the origins of capacities for culture, because to the degree that we have information about the anatomy of feet, we can use it to date when this started. The same idea follows from cooking and fire: since we know those are culturally transmitted, the point at which we begin to see their effects on our anatomy gives us clues to the origins of our capacities for culture.
Most recently I've also been thinking about the evolution of societal complexity. This is the emergence of complex societies that happens after the origins of agriculture, when societies begin to get big and complex and you have lots of interactions among strangers, large-scale cooperation, market exchange, militaries, and a substantial division of labor. We have a sense of the sequence of events, but we don't have good descriptions of the process. What are the causal processes that bring these things about?
One of the ideas I've been pursuing is that after the origins of agriculture there was an intense period, one that continues today, of intergroup competition, which favors groups whose social norms and institutions can more effectively expand the group while maintaining internal harmony, yielding the benefits of exchange, of the ability to maintain markets, of the division of labor, and of higher levels of cooperation. You then get intense competition amongst the early farming groups, and this is going to favor those groups with the ability to expand.
You need to be precise about what you mean by these cultural traits and norms. I've worked on this in a couple of different areas, and one is religion. We just got a big grant to study the cultural evolution of religion, the idea being that the religions of modern societies are quite different from the religions we see in hunter-gatherers and small-scale societies, because they've been shaped by this process over millennia, and specifically they've been shaped in ways that galvanize cooperation in larger groups and sustain cooperation amongst non-relatives.
The emergence of high-moralizing gods is an important example of this. In small-scale hunter-gatherer religions, the gods are typically whimsical. They're amoral. They're not concerned with your sexual behavior or your social behavior. Often you'll make bargains with them, but as we begin to move to the religions in more complex societies, we find that the gods are increasingly moralizing. They're concerned about exactly the kinds of things that are going to be a problem for running a large-scale society, like how you treat other members of your religious group or your ethnic group.
Experiments run at UBC and elsewhere have shown that when you remind believers of their god, they cheat less and are more prosocial or fair in exchange tasks (reminding atheists has no effect), and the exchange tasks in which believers are more prosocial are those with anonymous others, or strangers.
These are the kinds of things you need to make a market run and to sustain a successful division of labor. We've been pursuing that hypothesis: we've just sent a number of psychologists and anthropologists to the field, and we'll be doing more of that in the coming years, conducting these kinds of experiments in a diverse range of societies to see whether the moralizing gods of a variety of religions create these same effects.
We also think that ritual plays a role in this, in that rituals seem to be sets of practices engineered by cultural evolution to be effective at transmitting belief and faith. By attending a ritual, you elevate your degree of belief in the high-moralizing gods or in the priests of the religion. If you break down the rituals common in many religions, they put the words in the mouth of a prestigious member of the group, someone everyone respects, which makes the message more likely to be transmitted and believed.
People also engage in what we call credibility-enhancing displays [during rituals]. These are costly things. It might be an animal sacrifice or the giving of a large sum of money or some kind of painful initiation rite like circumcision, which one would only engage in if one actually believed in it. It's a demonstration of true belief, which then makes the observers more likely to acquire the belief.
Speaking in unison, large congregations saying the same thing, taps our capacity for conformist transmission: the fact that we weight what everybody else believes in deciding what we believe.
Rituals seem designed to tap our cultural transmission abilities to deepen faith, and one of the interesting ways this has developed is that high-moralizing gods will often require rituals of this kind. By making people perform the rituals routinely, the religion guarantees that the next generation acquires a deepened faith in the god, and the whole thing perpetuates itself in a self-reinforcing cycle.
We think religions are just one element, one way in which culture has figured out ways to expand the sphere of cooperation and allow markets to form and people to exchange and to maintain the substantial division of labor.
One of the interesting things about the division of labor is that you're not going to specialize in a particular trade (maybe you make steel plows) unless you know that other people are specializing in the other trades you need, say food or materials for building houses, and you're confident that you can trade or exchange with them to get the other things you need.
There's a lot of risk in developing specialization because you have to be confident that there's a market there that you can engage with. Whereas if you're a generalist and you do a little bit of farming, a little bit of manufacturing, then you're much less reliant on the market.
Markets require a great deal of trust and a great deal of cooperation to work. Sometimes you get the impression from economics that markets are for self-interested individuals. They're actually the opposite. Self-interested individuals don't specialize, and they don't take goods to market, because all this trust and fairness is required to make markets run with impersonal others.
In developing this line of thought, one of the things you need to be clear about is what you mean by culture and cultural evolution. Culture is one of those terms that has lots of different meanings, and people have used it in lots of different ways. In the intellectual tradition that I'm building on, culture is information stored in people's heads that gets there by some kind of social learning: imitation, teaching, any kind of observational learning.
We tend to think of cultural transmission, or at least many people do, as relying on language, but that's partly because in our culture, especially among academics, there tends to be a lot of talking. In lots of small-scale societies it's quite clear that a ton of cultural transmission happens strictly by observational learning.
If you're trying to make a tool, you're mostly watching the physical movements of the hands and the strategies taken. You might get tips that are transmitted verbally as you go along. In building a house, you're looking at how the house is built together, again with verbal comments as supplements to getting a sense for how the house goes together.
If you're copying how to shoot an arrow, you're watching body position, bow position, and aiming; you're not listening to a lot of exposition, although clearly the verbal part of transmission helps. We think, and there's experimental evidence to show, that you can transmit lots of stuff without using any words.
This is information stored in people's brains, and when we look at other animals, we find that the evolutionary models of culture make really good predictions about culture in fish. Fish will learn food-foraging preferences from each other, and non-human primates can learn from each other, but what we don't see amongst other animals is cumulative cultural evolution: the case in which cultural transmission is of high enough fidelity that what one generation learns begins to accumulate in subsequent generations.
One possible exception to that is bird song. Bird songs accumulate such that birds from large continents have more complex songs than birds from islands. It turns out humans from smaller islands have less complex material culture than humans from larger islands, at least until recently, when communication opened up. One of the interesting lines of research that's come out of this recognition concerns the importance of population size and interconnectedness for technology.
I began this investigation with a case study of Tasmania. Tasmania is an island off the coast of Victoria, in southern Australia, and its archeological record is really interesting. Up until about 10,000 or 12,000 years ago, the archeology of Tasmania looks the same as Australia's. The two seem to be moving along together, getting a bit more complex over time, and then suddenly, after 10,000 years ago, Tasmania takes a downturn. It becomes less complex.
The ability to make fire is probably lost. Bone tools are lost. Fishing is lost. Boats are probably lost. Meanwhile, things move along just fine back on the continent, so there's this divergence, and one nice thing about this natural experiment is that there's good reason to believe the peoples were genetically the same.
You start out with two genetically well-intermixed peoples; Tasmania was actually connected to mainland Australia, so it was just a peninsula. Then about 10,000 years ago the environment changes: it gets warmer and the Bass Strait floods. This cuts Tasmania off from the rest of Australia, and it's at that point that the technological downturn begins.
You can show that this is the kind of thing you'd expect if societies are like brains, in the sense that they store information as a group: when someone learns, they learn from the most successful member, information passes between communities, and the larger the population, the more minds you have working on the problem.
If the number of minds working on the problem gets small enough, you can actually begin to lose information. There's a steady-state level of information that depends on the size of your population and its interconnectedness. It also depends on the innovativeness of your individuals, but that has a relatively small effect compared with being well interconnected and having a large population.
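This population-size logic can be illustrated with a toy simulation (a simplified sketch I'm constructing for illustration, not the formal model from the literature; the function name and parameters are my own). Each generation, every learner imitates the group's most skilled member, but copies are degraded on average, with random error that occasionally produces a lucky improvement. Large groups draw more lucky copies per generation, so they sustain and build skill; small groups can't reliably beat the transmission loss and drift toward losing the skill entirely.

```python
import random

def mean_skill(n, generations=300, loss=1.0, noise=0.6, seed=1):
    """Toy model of cumulative culture: everyone copies the most skilled
    member each generation, with average degradation `loss` and random
    copying error `noise`. Returns mean skill after `generations`."""
    rng = random.Random(seed)
    skills = [1.0] * n
    for _ in range(generations):
        best = max(skills)  # everyone targets the most successful member
        # Each copy loses `loss` on average but gets a noisy draw; skill
        # can't go below zero (the technique is simply absent).
        skills = [max(0.0, best - loss + rng.gauss(0, noise)) for _ in range(n)]
    return sum(skills) / n

small = mean_skill(10)    # a tiny, isolated group (Tasmania-like)
large = mean_skill(1000)  # a large, interconnected population
assert large > small      # bigger groups hold and build more culture
```

The point of the sketch is only qualitative: below a population threshold the best available model isn't good enough, on average, to offset lossy transmission, so complex skills decay, which is the pattern the Tasmanian record suggests.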
There have been a number of tests of this recently, the best of which is a study by Rob Boyd and Michelle Kline. They took the fishing technologies of different Oceanic islands at the time Europeans first arrived and looked at how the population size of each island relates to tool complexity: larger islands had bigger, more complex fishing toolkits. You can even show an effect of contact. Some of the islands were in more or less contact with each other, and when you include that, you get the size effect plus a contact effect, the prediction being that islands in more contact have fancier tools, and that seems to hold up.
If you follow this idea a little bit further, then it does give you a sense that rates of innovation should continue to increase, especially with the emergence of communication technologies, because these allow ideas to flow very rapidly from place to place.
An important thing to remember is that there's always an incentive to hide your information. As an individual inventor or company, you're best off if everybody else shares their ideas but you don't share your ideas because then you get to keep your good ideas, and nobody else gets exposed to them, and you get to use their good ideas, so you get to do more recombination.
Embedded in this whole information-sharing question is a constant cooperative dilemma, in which individuals have to be willing to share for the good of the group. They don't have to know explicitly that it's for the good of the group, but a norm of information sharing is a really good norm to have, because it helps everybody do better: we share more ideas and get more recombination of ideas.
I've done a lot of work on marriage systems, particularly the evolution of monogamy. We have a human nature that pushes us towards polygyny whenever there are sufficient resources. Eighty-five percent of human societies have allowed men to have more than one wife, and very few societies have adopted polyandry, the flip side of this. A number of societies have allowed both, but they tended to be polygynous because, given enough resources, men are more interested in having additional wives than women are in having additional husbands, and men are less inclined to be second husbands than women are to be second wives.
But in the modern world, of course, monogamy is normative, and people who have too many wives are thought poorly of by the larger society. The question is, how did this ever get in place? And of course, it traces back through Europe.
One of the things that distinguished Europe from the rest of the world was something called the European Marriage Pattern, and part of that was normative monogamy, the idea that taking a second wife was wrong as long as you still had the first wife, and this actually traces back to Rome and eventually to Athens. Athens legislates the first rules about monogamous marriage just before the Classical period.
This is an example of a case where people were ready to moralize it, and I like to view it as the cultural evolution of a marriage system. Monogamy is peculiar: it doesn't fit with what we know about human nature, but it does seem to have societal-level benefits. It reduces male-male competition.
We think there's evidence that it reduces crime and substance abuse, and that it engages males in ways that cause them to discount the future less and to pursue productive activities rather than risky ones, including crime. Depending on your value system, if you think freedom is paramount, you might favor allowing polygyny; but if you're willing to trade some freedom against social ills like high crime, you might favor laws that prohibit polygamy.
When I talk about success and failure here, I don't mean anything moralizing. I'm talking about the cultural evolutionary processes that favor the spread of one idea over another. If I call normative monogamy successful, I mean that it spread, and in this case it spread despite being contrary to some aspects of human nature. It does harness our pair-bonding psychology in some respects, so it's a complex story, but it creates societal-level benefits.
Societies with normative monogamy are better able to maintain a harmonious population, and to increase trade, exchange, and economic growth, than societies that allow polygamy, especially where wealth varies widely, particularly among males. That's exactly the situation that would normally promote high levels of polygyny.
The absolute wealth difference between, say, Bill Gates, Donald Trump, and the billionaires of the world and the men at the bottom end of the spectrum is much larger than it has ever been in human history, larger in terms of total control of absolute wealth than for kings and emperors.
Males will be males, in the sense that they'll try to obtain extra matings, but the billionaires are completely curbed compared with what emperors have done throughout the ages, keeping harems and the like. The norms of modern society prevent that.
Otherwise, there would be massive male-male competition, and even to enter the mating and marriage market you would need a high level of wealth, if we let nature take its course as it did in the earliest empires. It depends on your views about freedom versus societal-level benefits.
Part of my program of research is to convince people that they should stop distinguishing cultural and biological evolution as separate in that way. We want to think of it all as biological evolution.
We want to distinguish genetic evolution and cultural evolution, and then at some point we may have epigenetic evolution, and there are other kinds of inheritance systems.
It's going to be a somewhat more complex story than that. Culture is part of our biology. We now have the neuroscience to say that culture is in our brains: if you compare people from different societies, they have different brains. Culture is deep in our biology.
People with different cultural backgrounds have different hormonal reactions as well as different brains on an MRI scan. So culture is just part of our biology, and we shouldn't take the dualistic view that there's a realm of ideas somehow separate from the realm of biology, such that you're always talking about one or the other.
Cognition and our ability to think are all interwoven, and we're a cultural species, which means one of our genetic programs is the ability to acquire ideas, beliefs, and values and weave them into our brains such that they then affect our biology. A good example of this is placebos.
Placebos depend on your cultural beliefs. If you believe that something will work, then when you take it, say an aspirin or a placebo in place of an aspirin, it initiates the same pathways as the chemically active substance.
Placebos are chemically inert but biologically active, and the effect is completely dependent on your cultural beliefs. If you don't believe that cures come in pills, taking a placebo aspirin has no effect on you. This shows the ability of a cultural belief to activate biological processes, and it's something we know a little bit about.
One of the large research projects that I run in an effort to understand human sociality is called The Roots of Human Sociality Project. In the mid '90s I was working in the Peruvian Amazon with a group called the Machiguenga. Traditionally, the Machiguenga lived in single-family units scattered throughout the forest. Through my advisor at the time, Rob Boyd, I had been exposed to something called the Ultimatum Game, which seemed to provide evidence that humans are innately inclined to punish unfairness.
In the Ultimatum Game, two players are allotted a sum of money, say $100, and the first player can offer a portion of this $100 to the second player, who can either accept or reject. If the second player accepts, they get that amount of money, and the first player gets the remainder. If they reject, both players get zero.
Just to give you an example, suppose the money is $100, and the first player offers $10 out of the $100 to the second player. If the second player accepts, he gets the $10 and the first player gets $90. If he rejects, both players go home with zero. If you place yourself in the shoes of the second player, then you should be inclined to accept any amount of money if you just care about making money.
Now, if he offers you zero, you have the choice between zero and zero, so it's ambiguous what you should do. But assuming it's a positive amount, so $10, you should accept the $10, go home with $10 and let the other guy go home with $90.
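The payoff logic just described can be written down in a few lines (a minimal sketch; the function name is mine):

```python
def ultimatum_payoffs(pot, offer, accepted):
    """Payoffs (proposer, responder) in a one-shot Ultimatum Game."""
    if not accepted:
        return (0, 0)            # rejection destroys the whole pot
    return (pot - offer, offer)  # otherwise the split stands

# The $100 example from the text: a $10 offer that is accepted...
assert ultimatum_payoffs(100, 10, True) == (90, 10)
# ...versus rejected, where both players go home with zero.
assert ultimatum_payoffs(100, 10, False) == (0, 0)
```

A purely income-maximizing responder should therefore accept any positive offer, which is exactly the prediction that the experimental rejections contradict.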
But in experiments with Western undergraduates going back to 1982, behavioral economists find that students offer about half, sometimes a little less, and responders are inclined to reject offers below about 30 percent.
Subsequent work with non-student adults in the West shows an even stronger result. The older you get, even with more wealth and more income, the more inclined you are to offer half, and you'll reject offers below 40 percent.
By 1995, the game had been run in a number of different countries, and the result seemed robust. I thought the Machiguenga would be a good test, because if they also showed this willingness to reject and to make equal offers, it would really demonstrate the innateness of the finding: they don't have any higher-level institutions, and it would be hard to make a cultural argument that they were bringing something into the experiment that caused the behavior.
I went and ran the game there in 1995 and 1996, and what I found amongst the Machiguenga was that they were completely unwilling to reject, and they thought rejecting was silly. They would almost explain the subgame-perfect equilibrium, the solution economists use, back to me: "Why would anybody ever reject? You lose money then." And they made low offers: the modal offer was 15 percent instead of 50, and the mean came out to about 25 percent.
Rob Boyd was my advisor at the time, and we went to the MacArthur Foundation for funding, which they provided. We were able to put together a team of anthropologists and bring them to UCLA, where we had some economists, including Ernst Fehr, Sam Bowles, and Herb Gintis, and we taught the anthropologists some game theory.
There was a large discussion about methods and about whether we could actually pull this off, and over the next two summers these field anthropologists went out and conducted the Ultimatum Game as well as a few other games, not systematically across the societies, but it gave us insight that we would later use. What we found is that societies vary dramatically, from societies that would never reject to societies that would even reject offers above 50 percent, and mean offers ranged across societies from about 25 percent to over 50 percent. We had some of what we called hyper-fair societies; the highest was 57 percent, in Lamalera, Indonesia.
We found we were able to explain a lot of the variation in these offers with two variables. One was the degree of market integration: more market-integrated societies offered more, and less market-integrated societies offered less. But other institutions seemed to matter too. Societies with institutions of cooperative hunting offered more, and these were independent effects.
This then led to a subsequent project where we measured market integration much more carefully along with a large number of other variables, including wealth, income, education, community size, and also religion. We did the Ultimatum Game along with two other experiments. The two other experiments were the Dictator Game (the Dictator Game is like the Ultimatum Game except the second player doesn't have the option to reject) and the Third Party Punishment Game.
In the Third Party Punishment Game, there are three players, and the first two play a Dictator Game. They're allotted a sum of money, say $100, and player A can offer any portion of the $100 to the second player, player B. Player B can't do anything in this game; he just gets whatever he's offered. But there is a third player, player C, who is given half the amount that A and B are dividing up, and he can pay up to 20 percent of his money to take money away from A at three times the rate. If he's given $50, he can spend $10 of it to take $30 away from player A. Suppose player A gives only $10 to player B and keeps $90 for himself. Player C can then pay $10, going home with $40 instead of $50, in order to take $30 away from player A, who goes home with $60 instead of $90 because he got punished. Player B still goes home with $10.
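The payoff arithmetic above can be sketched in a few lines. This is an illustrative reconstruction (the function name and parameters are assumptions, not the study's actual protocol), reproducing the worked $100 example:

```python
# Sketch of the Third Party Punishment Game payoffs described above.
# The interface is illustrative; amounts follow the text's rules.

def third_party_punishment(stake, offer, punish):
    """Return (A, B, C) take-home amounts.

    stake  : the sum A and B divide (e.g. 100)
    offer  : the amount A gives to B (B cannot reject)
    punish : whether C spends 20 percent of his endowment to fine A
             at three times that cost
    """
    c_endowment = stake / 2          # C is given half the A/B stake
    a, b, c = stake - offer, offer, c_endowment
    if punish:
        cost = 0.2 * c_endowment     # C's punishment budget: 20 percent
        a -= 3 * cost                # A loses $3 for every $1 C spends
        c -= cost
    return a, b, c

# Worked example from the text: A keeps $90, offers $10, and C punishes.
print(third_party_punishment(100, 10, True))   # (60.0, 10.0, 40.0)
```

The key design feature is that punishment is costly to C himself, so any punishment observed in the game measures a genuine willingness to pay to sanction unfairness toward a stranger.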
This gives us two different measures of willingness to punish strangers in ephemeral interactions (people you don't know and won't see again): one is rejection in the Ultimatum Game, and the other is this Third Party Punishment measure. Together with the offers, it gives us three measures of fairness in this kind of transaction.
It gives us offers in all three games, and what we found is that market integration again predicts higher offers in all three games, and the size of the community predicts willingness to punish. This fits with a lot of theoretical work suggesting that in small communities you don't need costly punishment. You need some kind of sanctioning system to keep people in line, but you're probably not going to do it with single individuals punishing; you have some other mechanism. It could be a reputational mechanism: if people don't cooperate in this situation, then you won't interact with them in some other situation. It's a withdrawal of interaction rather than direct punishment. There are a number of different ways to create norm systems that operate like that.
In a big society, punishment can be most effective because reputational mechanisms can be weak. If you encounter somebody in a big society, you probably don't have friends in common through whom you could pass reputational information and generate a sanction. You might want to punish them right on the spot, or someone who observes the interaction might want to punish them on the spot or call the authorities, which is also costly.
This creates a puzzle, because people typically think of small-scale societies, the hunter-gatherers and horticulturalists scattered across the globe (ranging from New Guinea to Siberia to Africa), as being very prosocial and cooperative. This is true, but that cooperation is based on local norms for cooperating with kin and on local interactions in certain kinds of circumstances.
Hunter-gatherers are famous for being great at food sharing, but these norms don't extend beyond food sharing. They certainly don't extend to ephemeral interactions or strangers, and to make a large-scale society run you have to shift from investing in your local kin groups and your enduring relationships to being willing to pay to be fair to a stranger.
This is subtle, and what people have trouble grasping is that if you're going to be fair to a stranger, then you're taking money away from your family. In these dictator games, giving 50 percent to an unknown person meant you were going home with less money, which meant your family and your kids would have less money. To observe modern institutions, to not hire your brother-in-law when you get a fancy job or get elected to office, is to hurt your family. Your brother-in-law doesn't have that job now; he has to settle for whatever other, less good job he can get.
A commitment to something like anti-nepotism norms runs against our evolved inclinations to help kin and to invest in long-term close relationships, but it's crucial for making a large-scale society run. Corruption, things like hiring your brother-in-law and feathering the nests of your close friends and relatives, is what really tears complex societies down and keeps them from working well. In this sense, the norms that make modern societies run are at odds with at least some of our evolved instincts.
Lately we've been focused on the effects of religion. One thing I didn't mention from the experimental games project is that in the second project, in addition to market integration, we independently found that adherence to a world religion matters. People from world religions were willing to give more to the other person in the experiment, the anonymous stranger.
We've been using these experiments in the context of behavioral games. There have since been a number of additional papers coming out of economics showing the relationship between market integration, using measures like distance from market, and people's willingness to build impartial institutions. Part of this is your willingness to acquire a norm of impartial rules: that we have a set of rules that governs this system.
Sometimes historians or political scientists call it the rule of law. We have an impartial set of rules that we're going to follow, and those rules apply independently of the identities and our emotional reactions towards the participants.
One of the things we find in the relationship between norms and these risk-managing institutions is that when you have risk-managing institutions, these impartial norms can spread. Otherwise, people are strongly biased toward maintaining local relationships. If you want the rule of law to spread or to be maintained, you need conditions in which risk is being managed.