Trust in game theory
The second condition is something I like to think of as efficiency. This signal is then interpreted by, and reacted to by, the other party. Honesty is the act of sending true signals, wherein the interpretation reflects the truth; this means both that you must not be lying, even by means of selectively presented facts, and that the other person must not misinterpret the signal. Under those circumstances, the signal must be mutually beneficial.
Note that this does not mean that both parties need to care about each other deeply. It ultimately has to do with co-ordination, and the extent of co-ordination in a given circumstance can be gauged by looking at whether the people are well-meaning and efficient with regard to signals.
In some cases, two rational players should be honest; in others, two irrational players can attain better payoffs by choosing to be well-meaning in order to reap the benefits of honesty. The latter case, combined with the efficiency of the former case, is what trust is.
How can we determine the importance of the last of these explanations, the trust mechanism? The significance of each of these possible explanations might be investigated as follows:
If sucker avoidance is part of the explanation for the increased cooperation, then one should find two things: first, that when faced with a single choice between the rows of only the first column of the prisoner's dilemma payoff matrix, people choose 2,2 more frequently than they cooperate when playing an ordinary prisoner's dilemma; and second, that when faced with a choice between the rows of the second column, they choose 0,3 less frequently than they choose to cooperate in the prisoner's dilemma.
I would hypothesize that only the second of these two consequences will be found in the laboratory. But there is still the issue of feeling like a sucker. One might get some evidence on the strength of this factor by exploring the consequences of labelling the 0,3 payoff as the sucker's payoff.
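The proposed single-column choices can be made concrete in a short sketch. A caveat: the text names only the outcomes 2,2 and 0,3; the remaining matrix entries (3,0 and 1,1) are standard prisoner's-dilemma values assumed here for illustration.

```python
# The payoff matrix assumed here: the text names (2,2) and (0,3); the
# entries (3,0) and (1,1) are standard prisoner's-dilemma values filled
# in for illustration. Payoffs read (row player, column player).
PAYOFFS = {
    ("C", "C"): (2, 2),  # mutual cooperation
    ("C", "D"): (0, 3),  # the cooperator gets the "sucker's payoff"
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def column_restricted_choice(column_move):
    """The proposed test: fix the column player's move and ask which
    row maximizes the subject's own monetary payoff."""
    c_payoff = PAYOFFS[("C", column_move)][0]
    d_payoff = PAYOFFS[("D", column_move)][0]
    return "C" if c_payoff > d_payoff else "D"

# On monetary payoffs alone, defection wins in both columns (3 > 2 and
# 1 > 0), so any excess of 2,2 choices in the first column, or
# avoidance of the 0,3 row in the second, must come from non-monetary
# motives such as sucker avoidance.
print(column_restricted_choice("C"), column_restricted_choice("D"))  # D D
```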
I've already commented on altruism as an explanation. If what explained cooperation were unconditional altruistic concern for the other, then the rates of cooperation in the simultaneous and sequential one-shot prisoner's dilemma would be just the same. But people can feel more kindly toward those whom they believe to be kind, and despite suspicions that initial cooperators might be cynical tacticians rather than kind souls, altruism could be enhanced.
This version of the second explanation, like explanation 3 in terms of avoiding embarrassment, can be tested by considering a 3-person game where B's payoffs depend on how A plays, A's payoffs depend on how F plays, and F's payoffs depend on how B plays.
The three payoffs are respectively to F, A and B. "Defect regardless of what others do" is still a dominant strategy for all of the players, but it doesn't matter to B's payoff how F plays, to F's payoff how A plays, or to A's payoff how B plays. Suppose F moves first and plays "C" and then A and B play simultaneously. If what is driving B's greater rate of cooperation in the two-person game when A opens by cooperating is a desire to be fair or not to look bad compared to the person who has cooperated, then one should find a comparable increase in the rate of cooperation here.
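This three-person game can be sketched directly, assuming each player's payoff is given by the two-person row payoffs (again with the assumed entries 3,0 and 1,1 alongside the stated 2,2 and 0,3):

```python
from itertools import product

# Assumed two-person row payoffs: own payoff given (own move, move of
# the one player your payoff is tied to).
ROW_PAYOFF = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def payoffs(f_move, a_move, b_move):
    """Payoff triple (F, A, B): each player's payoff depends only on
    their own move and one other's -- F's on B, A's on F, B's on A."""
    return (
        ROW_PAYOFF[(f_move, b_move)],
        ROW_PAYOFF[(a_move, f_move)],
        ROW_PAYOFF[(b_move, a_move)],
    )

# Everyone cooperating beats everyone defecting...
print(payoffs("C", "C", "C"), payoffs("D", "D", "D"))  # (2, 2, 2) (1, 1, 1)

# ...yet defecting remains dominant: for F, say, switching C -> D
# raises F's payoff (2 -> 3 or 0 -> 1) whatever A and B do.
assert all(
    payoffs("D", a, b)[0] > payoffs("C", a, b)[0]
    for a, b in product("CD", repeat=2)
)
```

Note that if A defects after F cooperates, B's payoff is hurt even though B cannot affect A's payoff in return, which is what isolates fairness and shame from reciprocity here.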
As far as I know, nobody has done the experiment. How can one determine how important the trust mechanism is in explaining the increased cooperation in the sequential prisoner's dilemma? The fact that many subjects in Wrightsman's experiment mentioned "trust" provides some prima facie reason to think that this mechanism might be significant. One thing that can be done easily is to consider the effect of making considerations of trust more or less salient.
For example, the game could be presented in extensive form with A's choices labelled "trust" or "play it safe," and B's choices in the circumstances in which A "trusts" as "fulfill the trust" or "betray the trust."
More interestingly, one could contrast the sequential game described above, in which A knows that her move will be common knowledge before B moves, with a different sequential game in which A is not told that B will know how she moved before he gets to move. After A moves, B is told her move and is told that A was not informed that her move would be announced to him before he moved. Reciprocal altruism should have a more pronounced effect than in the sequential game with common knowledge of A 's move, because A 's initial cooperation cannot be strategic.
Sucker avoidance should have exactly the same effect on B. The effect of shame avoidance can be held constant if B is told that A will learn later that B made his choice after knowing what A chose.
But without it being common knowledge that B knows what A chose before he chooses, there is no way for A to make a trusting overture to him. In this way I think one might be able to learn something about the comparative importance of the trust mechanism. I have some research funds and some friends with experience in economic experimentation, and I plan on carrying out the experiments sketched above.
I expect to confirm the results suggested by Wrightsman's study and to find that the level of cooperation of B in the sequential game without common knowledge in response to cooperation by A will be between the level of cooperation in the ordinary prisoner's dilemma and the level of cooperation in the sequential prisoner's dilemma with common knowledge.
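The design logic of these comparisons can be restated as a small table. This is only a tabulation of the reasoning above, not experimental data, and the booleans flatten matters of degree (e.g. reciprocal altruism is predicted to be stronger, not merely present, without common knowledge):

```python
# Which motivations can operate on B's choice in each condition
# (True = can operate). A restatement of the argument in the text.
mechanisms = {
    "simultaneous PD": dict(
        sucker_avoidance=False, reciprocity=False, shame=False, trust=False),
    "sequential, no common knowledge": dict(
        sucker_avoidance=True, reciprocity=True, shame=True, trust=False),
    "sequential, common knowledge": dict(
        sucker_avoidance=True, reciprocity=True, shame=True, trust=True),
}

# The two sequential conditions differ only in whether A can make a
# trusting overture, so a gap in B's cooperation between them would
# isolate the trust mechanism.
difference = {
    name for name, on in mechanisms["sequential, common knowledge"].items()
    if on != mechanisms["sequential, no common knowledge"][name]
}
print(difference)  # {'trust'}
```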
I expect that making it possible for the trust mechanism to operate will make a significant difference to the results. Needless to say, I may find out that these predictions are all wrong. However it turns out, such an investigation is likely to underestimate the importance of the trust mechanism, because of the controls that make it impossible for the actions of individuals to influence their reputations.
However necessary these controls are to make definite what game individuals are playing, they are problematic if one is investigating trust, for concerns about reputation are, I conjecture, among the most important motivations for trustworthy behavior.
Oliver is more distressed that Mr. Brownlow will think he has stolen his books and his money than at losing his opportunity for a better life: "Keep me here all my life long; but pray, pray send them back. He'll think I stole them; the old lady--all of them who were so kind to me will think I stole them. Oh, do have mercy upon me, and send them back!"
To incorporate reputation, one will have to consider much more complicated games. Many more investigations can be carried out. How is the influence of trust affected by the size of the payoffs?
What would happen if the encounters, although still among anonymous strangers, were face-to-face? What effect do the culture and upbringing of the subject have? How powerful might examples of trust and distrust be? And so forth. But the results of even these primitive experiments could have implications for policy.
If the experiments show that manifesting trust matters significantly, they imply that one can facilitate cooperation by making it possible for it to be common knowledge that people are counting on one another. Citizens who know that others are counting on them and that the others acted as they did in part because they knew that their fellow citizens would know that they were counting on them can be motivated to reward this trust.
If opportunities for placing and rewarding trust are widely available, especially in contexts where the costs of misplaced trust are initially low, one can build a fabric of good will and mutual confidence that facilitates cooperation and that is a good thing in itself. "As soon as I started it, I got hooked. Now I really look forward to it. Often, I'm the only person who stops by. If you could just see how grateful they are, you'd know why I've been doing this for two years" (quoted in Rhoads, p. ).
This is not the place to settle the definition of altruism, but it is clear that there are powerful non-altruistic motivations at work here. There is a line between a prudent concern for reputation and a concern for the regard of others that goes beyond prudence, but I shall not attempt to draw that line here and shall in this way vulgarize Pettit's account. People seek wealth and commodities in part because they seek self-respect.
Very few seek self-respect because it will make them wealthier. I shall not, however, comment further on this interesting possibility. "With either private or public property rights we are apparently unable to perceive how to manufacture such valuable commodities" (McKean, p. ). One might find much higher rates of "altruism."
Furthermore, players who are allowed to see one another may interpret all sorts of cues as indicating that others are placing trust in them. A subject is "trusting" if "the subject chose C, expected the other person to choose C. Any choice of D, or the choice of C with the expectation that the other would pick D was classified as distrusting, when the person gave as his reason distrust or fear of the other's response" (p. ).
It is impossible to tell what overall percentage of first movers cooperated, and it is not clear how players were classified who cooperated and expected the second player to cooperate but did not give the right reasons.
The subjects in Wrightsman's study are much nicer.

Trust in Game Theory, Daniel M.

People may misread, misspeak, etc. Odd beliefs. Subjects who believe in magic or who accept fallacious arguments may believe that their cooperating makes it more likely that their opponent will cooperate too.
Subjects might also envision future interactions with other subjects or with the experimenters and thus believe that they are playing a more complicated game with a more complicated structure than is represented here. Other motives. Subjects may care about other things in addition to their own monetary payoffs. They may be altruists. They may be governed by concerns about fairness. They may be concerned about what the experimenters will think of them.
Ken Binmore, for example, writes, The framework of game theory enables one to categorize the features that may influence preferences: The "material" payoffs. Even if, as I believe, people care about other things, too, most presumably care about the outcome for themselves. Altruistic, fair-minded, or malevolent players also care about the pay-offs to the other players. The strategies and rules. Players may care not only about how much money they wind up with, but about how they got it. Preferences may depend on both the set of permissible strategies and on what strategies are actually chosen.
Some of these preferences may be bizarre: faced with a choice between playing "left" or "right" an individual may choose "right" out of a horror of anything associated with communism.
But there may be system and rationale governing preferences for strategies, too. For example, as Peter Diamond pointed out, people may prefer to flip a coin to determine which of two individuals receives a benefit rather than directly choosing whom to benefit. The other players. Players may care about who their opponents are, what the other players prefer, what strategies the other players adopt, and what the other players believe about oneself. I have different expectations about how Mother Teresa and Margaret Thatcher will play, and I may care differently about what payoffs they receive; faced with a choice between larger and smaller payoffs for both me and my opponent, I might choose the smaller to spite the other or to ensure that my performance relative to theirs is better.
Recently Geanakoplos, Pearce and Stacchetti and Rabin have explored ways of making payoffs depend on beliefs about the other players. Concepts and initial expectations. Players may make surmises about what others will do on the basis of general social norms and expectations rather than on the basis of beliefs about the particular individuals they are playing, and they may respond to the moves of others very differently depending on what their initial expectations are.
In a prisoner's dilemma unlike the one above, in which the payoff from mutual cooperation is only slightly less than the maximum payoff from defecting but in which it is very costly to be a sucker, rates of cooperation might be highly sensitive to expectations about what others will do. I can think of four motivations that explain these results: 1. Sucker avoidance. If A plays "C", B no longer faces a risk of getting the 0 payoff, of losing to A, or of suffering the shame of being a sucker.
2. Fairness and reciprocal altruism. B wants to behave fairly and to reward kindness with kindness. After A plays C, it is unfair to play D; after she plays D, it is fair. A's "cooperating" could lead B to think that she is a nice person and to feel more altruistic toward her, but, as Matthew Rabin pointed out to me, B might worry that A is in fact the sort of person who would defect if she were in his place and that she is playing "C" strategically in order to elicit cooperation from him.
In the ordinary prisoner's dilemma, in which B does not know how A plays, none of these issues arise, and defection does not seem unfair.
3. Shame avoidance. B does not want to behave worse than A behaves. By defecting after she cooperates, he would be embarrassed, but there is nothing embarrassing about repaying defection with defection. Defecting in an ordinary prisoner's dilemma, in contrast, can be justified by fear that A would defect. 4. Trust. B takes A's cooperation as an announcement "I trust you" and is responding to this overture.
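As a toy illustration, the pull of these motivations on B once A has cooperated can be sketched as adjustments to B's monetary payoffs. This is a minimal sketch, not a model from the paper: the monetary payoffs 2 and 3 come from the matrix discussed earlier, while the psychological weights are invented placeholders. Sucker avoidance drops out here, since once A has cooperated B runs no risk of the 0 payoff.

```python
# Illustrative only: B's choice after A has played "C", with invented
# psychological adjustments added to the monetary payoffs.
MONEY = {"C": 2, "D": 3}  # B's monetary payoff once A has cooperated

def b_utility(move, reciprocity=0.0, shame=0.0, trust=0.0):
    """B's adjusted payoff; the weight names track motivations 2-4."""
    u = MONEY[move]
    if move == "C":
        u += reciprocity  # rewarding kindness with kindness
    else:
        u -= shame        # embarrassment at behaving worse than A
        u -= trust        # cost of betraying A's trusting overture
    return u

# On money alone, B defects (3 > 2); modest weights flip the choice.
print(b_utility("D") > b_utility("C"))  # True
print(b_utility("C", reciprocity=0.5) >
      b_utility("D", shame=1.0, trust=1.0))  # True (2.5 > 1.0)
```

In the ordinary simultaneous dilemma all three adjustments would be zero, which is one way of putting the paper's point that these motivations only get traction once A's cooperation is known.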
References

Bacharach, Michael and Susan Hurley, eds.
Becker, Lawrence.
Binmore, Ken. Playing Fair.
Coleman, James. Cambridge: Harvard University Press.
Dickens, Charles. Oliver Twist. New York: New American Library.
Deutsch, Merton.
Gambetta, Diego. Trust: Making and Breaking Cooperative Relations. Oxford: Blackwell.
Gilbert, Margaret.
Good, David.

You can have trust in your head; you can have cognitive trust, affective trust, feelings that somebody has integrity. The most important things are about behavior. There are debates in the field, and there will continue to be debates. And people will niggle and nitpick.
One of the theories that we had was that if things are bad early, they may have a higher potential later on.
So, we actually tried to test the two, and we had interactions between people where there was a trust breach early or a trust breach after many cooperative choices. And it turned out that the early trust breaches were devastating and led to less future cooperation, less trust. When we saw breaches late, we thought that this might really, really damage things. And what happens with strength is we do respect it. And when someone is trustworthy and strong — i. However, if it was intentional, it tells you a lot about the person.
You might work with the other person; you might cooperate with them; you might have a beneficial relationship. You take the first step in whatever it is — sharing a secret, providing some information about a work project, or loaning money or a car. The question is, how vulnerable do you want to make yourself? The model tells you to be very cautious. And if you continuously get positive information, trust can continue to grow — again, to a limit.
In business situations, it truly pays to have many people you can trust a moderate to high amount and nobody you have to depend upon too much.

It Depends

How much do I actually want to loan my best friend? But when it starts to hurt me, can hurt me badly, do I really want to loan them , dollars? That would be tough.
Would you loan me ? Most of us are willing, with people we know, to take some risks.

Trust and Leadership Applications

And what a lot of research suggests is every leader should also want to come across as warm — as interpersonally warm and caring. So, this nice combination of competence and warmth is dynamite for leaders. You have a good track record. I have high expectations from you. And what I want to have happen is for those teams to coordinate themselves well and trust each other.
Add on some training programs where their abilities increase — absolutely tremendous bottom-line impact over the long term. So, automatic trust is a situation where you get a cue that all of sudden leads you to be more trusting than you would be otherwise.
And there are lots of those cues that are possible. And we all know of them. And we have schemas for professors; we have schemas for nerds; we have schemas for CEOs.
And we have schemas for people who are likeable but not trustworthy and schemas for people who are trustworthy but not so likeable. And we accelerate a trust-development process to both of our mutual benefit.