Playing for the Upper Hand
April, 1969
"I shall never believe," Albert Einstein once said, "that God plays dice with the world." It was necessary to say this because all the evidence of modern physics points the other way. The world behaves as if God does play dice with it. Relativity, quantum theory, the Heisenberg principle--all tore apart Newton's model of an orderly world with causes leading to uniform effects. Physicists, to live with disorder and uncertainty, had to turn to probability and statistics--just as the Weather Bureau gave up trying to make exact predictions and now settles for telling you that your picnic has a 30-percent chance of getting rained out.
There has been a vital spin-off from this development. As mathematicians kept improving the statistical, probabilistic tools for explaining things, scholars in other fields as far-flung from physics as economics and political science began adapting them to explain the operation of their own worlds. And after decades of being digested, filtered, translated and reworked, the approach is shaping up to where you can use it, too--not to form your own theory of the universe but to develop everyday strategies for problems as trivial as deciding what to order from a Chinese menu or as important as deciding whether to live in the city and get mugged or to move to the suburbs and die of boredom.
If God plays dice with the universe, then everyday life is a series of accidents, some good, some bad. What you can do about it is try to improve the odds for the good accidents and minimize the odds for the bad. In other words, roll your own dice.
This basic idea--that statistics and probability theory can be applied to human conflict and decision making under uncertainty--was first laid out in detail in 1944, when John von Neumann and Oskar Morgenstern published an unreadable book called Theory of Games and Economic Behavior. Because the book was unreadable and its mathematics so far out, the ideas in it were slow to catch on. It is only in recent years that a large body of literature has developed to show how game theory can be applied to real-world problems.
In using game theory, it is often necessary to use numbers. This need not be as alarming as it may sound to the nonmathematically inclined. Numbers are less frightening when you realize that they don't complicate. They simplify. They wipe the fuzzy edges from the world. And they give you a tool with which to manipulate the environment, instead of standing passively around while the environment manipulates you.
The first people to grasp the everyday applications were, as might be expected, the people in the think-tank business. Dan Ellsberg was one of them. As one of the bright young men of the Rand Corporation, he has the imagination to apply high-level theory to personal crisis. He proved it five years ago, during the big fire in Beverly Hills, while flames were sweeping through Brentwood, north of Sunset Boulevard. Ellsberg lived south of Sunset, and he and his neighbors were standing in front of their houses and watching the flames a half mile away. All that was needed to send the fire in their direction was a shift in the wind.
But would it shift? Should they evacuate and risk doing a lot of work for nothing, or stay and risk having their possessions burned?
"Everyone was very reluctant to evacuate," recalls Ellsberg. "They stood there and kept saying it wouldn't come across Sunset, as though the street were some sort of firebreak. The talk had a spiritual premise, really. It was as though God were destroying the people north of Sunset and it wouldn't be consistent to wipe out our side, too."
Ellsberg did not waste time on this talk. First, he calculated the probability that the wind would, indeed, shift, basing it on his own knowledge of the fickleness of the weather in that location. Then he estimated the value of his goods and the cost of moving them to locations that were various distances away. It was a two-person game, with Ellsberg rolling the dice on one side and God on the other, and his equation demonstrated that moving his household goods completely out of the danger zone would not be worth the trouble. However, it would be worth while to move them a short distance away. So he did.
The fire did not reach his house and the work of moving turned out to be wasted. But Ellsberg went to sleep that night--as he would have, no matter which way the fire went--with the comforting knowledge that he had not made an avoidable mistake.
His method was to measure an expected cost against an expected return. The idea of mathematical expectancy is simple enough. When you bet at the race track in Florida, the state and the track take 15 percent off the top. In the parimutuel system, you are betting against all the other participants with the remaining 85 percent. If everyone bets at random or if all bettors are equally skilled, your expected return is 85 cents for each dollar spent. In game theory, it simplifies matters to assume that an expected return is always equal in usefulness to a fixed return--as, indeed, it is in real life, when you can play the game often enough for deviations from the expected to average out.
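If you want to see the skeleton of such a calculation, here is a minimal sketch in Python. The probabilities and dollar figures are illustrative assumptions, not Ellsberg's actual estimates; only the shape of the comparison matters:

```python
# Expected-loss sketch of the fire decision. Every number here is an
# illustrative assumption, not one of Ellsberg's actual estimates.
p_fire = 0.2     # assumed chance the wind shifts and the fire crosses Sunset
goods = 20_000   # assumed value of the household goods, in dollars

stay = p_fire * goods              # stand pat and risk everything
move_far = 2_000                   # assumed cost of hauling it all out of range
move_near = 300 + p_fire * 2_000   # small cost now, assumed residual loss if fire comes

for choice, loss in [("stay put", stay), ("move far", move_far), ("move near", move_near)]:
    print(f"{choice:10s} expected loss: ${loss:,.0f}")
# With these numbers the short move wins: a $700 expected loss against
# $2,000 for the full move and $4,000 for standing pat.
```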
Properly applied, mathematical expectancy is a useful concept in making decisions, even when the decisions must take into account the actions of other people. This was brought home quite clearly to a junior officer of Harvard University one winter day, when he and his date were driving on U. S. 128 west of Boston. They spotted a stranded motorist with the hood of his car up and a red sign displaying the plea Send Help.
"We'll stop at the next phone booth and call the highway patrol," said the Harvard man.
"Never mind," said the girl. "Somebody else will stop."
"No," he replied. "Everyone will say that somebody else will stop and, therefore, no one will stop, and so we must stop."
"Don't be a nut," she said. "Everyone will say, 'Everyone will say that somebody else will stop and, therefore, no one will stop, and so we must stop. And since they're going to stop, why should we? In fact, if we do stop, we will just add to the confusion."
"Confusion?"
"Yes. The highway-patrol switchboard could get jammed with so many calls," she said, triumphantly. "And we, by contributing to the jamming, would keep help from reaching that poor man back there."
If he could have counted on his fingers, the Harvard man might have carried this argument through one more reversal before losing it. But he needed his hands to drive, and the girl did have a point. If everyone who had an impulse to call did call, it would be both wasteful and inefficient. They had an obligation, but it lay at some point on a continuum between calling and not calling--between one call and zero calls. How to calculate it? He found a way.
A check of the visible traffic led to the estimate that 300 cars an hour had an opportunity to observe the stranded motorist. Perhaps 80 percent of these drivers would feel some concern. They, too, would be torn with indecision and perhaps half would come out with a feeling of obligation to call. Half of these might find an excuse not to. This left a net expectancy of 60 calls.
Sixty calls was clearly an inefficient number to rescue one stranded motorist. They could jam the switchboard and keep other emergency calls from getting through. No one would want that on his conscience, any more than he would want the stranded motorist to be neglected. Six calls, not 60, would be better--enough to ensure that the message would get through but not enough to cause needless confusion. (The danger of confusion is real. If you have ever reported an accident on a busy highway, chances are you experienced a frustrating conversation with the dispatcher while he tried to decide whether the accident you had seen was a new one or the one to which he had already dispatched assistance.)
From society's viewpoint, the problem was one of reducing the 60 calls to 6. If there were perfect communication, the drivers passing by could count off by 50s and every 50th driver could call. Or the motorist could take down his sign and display it only to every tenth driver passing by.
These solutions were idealistic and impractical. What the Harvard Samaritan needed was a way to carry out his exact share of society's obligation. His share was one tenth of a phone call.
There is, of course, no such thing as one tenth of a phone call. Phone calls are not divisible. But there is such a thing as a mathematical expectancy of one tenth of a phone call.
"Look," he told the girl, "when the impulse strikes you, say 'Veritas' and I'll peek at the second hand of my watch. If it is somewhere between zero and six, we'll call. Otherwise, we won't."
"Veritas," she said.
It was 17 seconds after the minute. They did not call.
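For the curious, the whole trick, estimates and watch included, fits in a few lines of Python. A sketch, with the article's own round numbers plugged in:

```python
import random

# The Harvard man's chain of estimates, as given above.
cars_per_hour = 300
concerned = 0.80    # drivers who feel some concern
obliged   = 0.50    # of those, the share who resolve to call
no_excuse = 0.50    # of those, the share who don't talk themselves out of it

expected_calls = cars_per_hour * concerned * obliged * no_excuse
print(expected_calls)   # 60.0 calls -- ten times too many

# Six calls would do, so each potential caller owes one tenth of a call:
# call with probability 6/60. The second hand of a watch is as good a
# randomizer as any.
second_hand = random.randint(0, 59)
if second_hand < 6:          # 6 chances in 60: exactly one tenth
    print("Veritas -- stop and call.")
else:
    print("Drive on; your duty is done.")
```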
Later, he asked Thomas Schelling, Harvard's specialist in games and strategy, whether they had acted properly.
"Exactly," said Schelling, "given two requirements: that your date wouldn't have felt resentful if you did stop, and that you don't feel guilty because you didn't stop."
These requirements, as Schelling pointed out, are what hang up the idea of a draft lottery. Before the drawing, everyone who has a number in the fish bowl has an equal expectancy of military service. Nothing could be more fair. But after the drawing, those who are drafted might feel resentful and those who are not might feel guilty. Both feelings, while understandable, represent a lack of appreciation for the beautiful, numerical fairness of the solution. It is a common failing. At cocktail parties, when he tells the story of the stranded motorist, the Harvard man finds that people insist on trying to make him feel guilty for not stopping and calling the highway patrol. They fail. He did his precise duty and avoided the errors of either underperformance or overperformance.
This case approaches the heart of game theory, except for one remaining element: conflict. The theory reaches its finest applications when there is competition, when one man's losses are another man's gains. To appreciate this fact, it helps to consider a basic example in which the conflict is total--two players and only one can come out ahead. This is the two-person, zero-sum game. It is called zero-sum because when you match the losses of the loser against the gains of the winner, they balance out to zero.
Ponder the plight of a junior Government executive in Washington who got his job when the Democrats seemed to have a permanent lease on the capital. Now, sensing that the party in power could change, he wishes to cement his social contacts with Republicans. At the same time, he does not want his Democratic superiors to learn that he is doing this. And he must watch his step, because there is a female columnist for a Washington newspaper who nurses an old grudge against him and would like nothing better than to tell the world about his presence in the wrong sort of company.
On Saturday night, there will be two fund-raising cocktail parties--one for a Democratic Senator, one for a Republican. He gets invitations to both. So does she. Each knows the other has these invitations. Which should he accept? To decide, he first assigns a numerical value to each of the four possible outcomes. His thinking might go like this:
"The best possible deal is for me to go to the Republican affair while she goes to the Democratic one. So I'll make that choice worth ten points.
"If she goes to the Democratic bash and I do, too, it's still not such a bad deal. At least, I'll be seen in the right place and I can thumb my nose at her Seven points.
"Next-best outcome is for me to go to the Democratic affair while she drinks with the Republicans. I keep my nose clean, even if I don't get any credit for it. Three points.
"Worst of all is for both of us to go to the Republican thing. She catches me there and I could get fired. Score it zero."
You may challenge, if you wish, the neatness by which complex values are assigned simple numbers. But for this example, all you need accept is the order of the priorities, and their logic is self-evident, at least to the bureaucratic mind. To keep track of the possible outcomes, he puts them in matrix form:

                      She: Democratic    She: Republican
  He: Democratic             7                  3
  He: Republican            10                  0
The numbers in each cell are his pay-offs. Since her aim is to cause him as much pain as possible, the same numbers are her losses. He wants to maximize his pay-off, she wants to minimize it.
When you look at the matrix and reflect a bit, the solution becomes obvious. Her best move, no matter what he does, is to go to the Republican event. That way, she can hold him to three or zero, instead of giving him a chance for seven or ten. Because he knows that she is as smart as he is, he must assume she'll figure this out. So he must pass up the temptation to try for the best possible score and maximize his minimum gain by hoisting his drinks with the Democrats. That way, he'll score no less than three and, if she does turn out to be more foolish than he thought, there's a chance of getting the seven.
Their strategies converge, then, in the upper-right-hand cell. He goes to the Democrats' party, she parties with the Republicans. Each has the satisfaction of knowing that, from his or her point of view, he or she is assured of the least of the worst possible outcomes.
Where opposing strategies naturally converge in this manner, game theorists call it a saddle point. Most games don't have saddle points, but when they do, the solution is automatic. And there is a shorthand way to spot this condition in a game matrix: Look for a cell with a number that is both lowest in its row and highest in its column. If there is such a cell, that's the saddle point, and that's where you should play. If none exists, no single choice lets each side minimize risk and still seek the best feasible gain.
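The test is mechanical enough to hand over to a machine. Here is a minimal sketch in Python, run against the cocktail-party matrix above (the layout and function name are mine, for illustration):

```python
# Saddle-point test: find any cell that is the minimum of its row
# and the maximum of its column.
# Rows are his choices, columns are hers, as in the matrix above.
matrix = [
    [7, 3],    # he goes Democratic: (she Democratic, she Republican)
    [10, 0],   # he goes Republican: (she Democratic, she Republican)
]

def saddle_points(m):
    points = []
    for i, row in enumerate(m):
        for j, cell in enumerate(row):
            column = [r[j] for r in m]
            if cell == min(row) and cell == max(column):
                points.append((i, j, cell))
    return points

print(saddle_points(matrix))
# [(0, 1, 3)]: he drinks with the Democrats, she with the Republicans.
```

And on that dismal note, matters might stand--except for a proposition advanced by Von Neumann and Morgenstern in their classic work.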
They showed how to get around the awkward case of no single best strategy by using probability and mathematical expectancy. Their proof is rather intricate. John W. McReynolds, the political scientist, once asked a friend whose work had contributed to the theory how long it would take to lead him through the book if he picked up a couple of years' work in mathematics first.
"He told me," McReynolds reported in the American Journal of Physics, "that it would take him--if he had nothing else to do--three or four months to get through it himself. We let it go at that."
Nonmathematicians must, therefore, accept a good deal on faith. But it is easy to see that it works. First, refresh yourself on these two simple rules of probability:
1. To find the probability that both of two independent events will occur, multiply their separate probabilities. (The probability of flipping heads once is .50. The probability of doing it twice is .50 times .50, or .25.)
2. To find the probability that either of two mutually exclusive events will occur, add their separate probabilities. (The probability of getting heads or tails on one flip of the coin is .50 plus .50, or one; i.e., certainty.)
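If you prefer to see the rules at work, here they are as arithmetic, in a short Python sketch:

```python
# Rule 1: the probability that BOTH of two independent events occur
# is the product of their separate probabilities.
p_heads = 0.50
print(p_heads * p_heads)       # 0.25 -- two heads in a row

# Rule 2: the probability that EITHER of two mutually exclusive events
# occurs is the sum of their separate probabilities.
p_tails = 0.50
print(p_heads + p_tails)       # 1.0 -- heads or tails is a certainty
```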
Which leads to the next case. This time, picture yourself as the hero. On a double date, you have met a long-limbed redhead who shares your passion for skiing. Unfortunately, she was the other guy's date, not yours. You call the next day and invite her for a weekend of winter sport. She accepts, provided the two of you go either to Snowville, New Hampshire, or to Aspen, Colorado, the only places she skis.
This is fine with you. Unfortunately, her boyfriend finds out. He is insecure about women and so you are concerned, though not surprised, when you hear that he has resolved to track you down and create an unpleasant scene. While he knows that you will be in Aspen or Snowville, he will have time to look for you in only one place.
You construct the matrix. Assigning pay-offs is easy. The opponent wants to catch you with his girl. You don't want to get caught. So you express each pay-off in terms of the probability of not getting caught at each location. This takes some judgment, of course. You have been to Aspen and you know that it is big and tends to be crowded, and you figure you'd have a 60-percent chance of getting lost in the crowd. Snowville is more intimate. If you took the girl there and the boyfriend also looked there, your chance of ducking him would be only 40 percent.
The numbers, remember, are pay-offs to you. Two of the cells are worth 100, because your chance of success reaches certainty if you ski in one location while he's looking in the other.
You look for a saddle point and find there is none. No number is lowest in its row and highest in its column. That being the case, your impulse may be to take the girl to Aspen, where the odds of not being found are the more favorable. But before deciding, you might have the same sort of conversation with yourself that the Harvard guy had with his girl on Route 128:
First thought: Hell, why don't I take her to Aspen? A 60-percent chance of success beats a 40-percent chance.
Second thought: Don't be simple-minded. He'll expect me to go to Aspen for that reason. Therefore, I should go to Snowville.
Third thought: But he'll figure that I expect him to expect me to go to Aspen and that I would therefore go to Snowville. So that's where he'd look, and so I should go to Aspen.
Fourth thought: No, he'll expect me to expect him to expect me to...arrrrgh.
There is a way out of all this. Ancient Chinese warriors used it to decide their routes of attack. If you don't want the enemy to figure out what you are thinking, don't think. Flip a coin, instead. If you reach a decision through a random device, no one will be able to read your intentions, because you won't know them yourself. Primitive hunters unwittingly followed the same idea when they cracked bones and studied patterns in the cracks to decide where to look for game. It worked. While they thought the gods were telling them what to do, the effect was to randomize their searches so that the animals would never be able to sense a pattern and stay out of the way. A fleeing rabbit is another example. Its zigs and zags, leaps and bounds are governed by unconscious nerve centers. The hunter can't solve the pattern and predict the right place to aim his rifle, because the rabbit itself doesn't know which way it'll jump.
So flipping a coin is one way to decide where to ski. But it is not quite the best way. Von Neumann's contribution was that he saw how you can mix the odds so that you can be indifferent to your opponent's move. Only then will you have fully minimized your expected loss--or maximized your expected gain. The trick is to weight the odds of your taking each of your two choices--much as the Harvard humanitarian weighted his odds of helping the stranded motorist--so that the expected effects of the two alternatives are equalized. If that sounds complicated, don't brood about it. Just follow this rule of thumb: Weight the odds for each of your possible moves by the difference between the pay-offs for the other move.
To be more specific, take the difference between your two possible Snowville pay-offs and use that value to weight the Aspen odds. Take the difference between your Aspen pay-offs and assign the resulting value to Snowville. Thus:
Snowville weight: 100 - 60 (the two Aspen pay-offs) = 40
Aspen weight: 100 - 40 (the two Snowville pay-offs) = 60
Total: 100
What these numbers mean is that you should arrange things so that you have 40 chances out of 100 of going to Snowville and 60 chances out of 100 of going to Aspen. That the total adds to the round sum of 100 in this case is merely a convenient coincidence. In some games, it might be 7 out of 12 or 13 out of 208. Now, how are you going to give yourself 40 chances out of 100 of going to Snowville and 60 chances out of 100 of going to Aspen? Put 40 black marbles and 60 white marbles in a hat, close your eyes and draw one. Or make it four black and six white and use a Mason jar. You go to Snowville if you draw black and to Aspen if you draw white.
Pause now and appreciate the beauty of this way of mixing strategies. You have trimmed the chances of getting caught with the redhead from a nerve-racking 40 percent--which would have been the case had you and the boyfriend both done the obvious and gone to Aspen--to a more relaxing 24 percent. And you have limited him to this 24-percent chance of finding you, no matter what he does. The proof:
1. He goes to Snowville. Odds that you'll be there are 40 percent. Chance of his finding you if you are there is 60 percent. Forty percent of a 60-percent chance is a 24-percent chance.
2. He goes to Aspen. There is a 60-percent chance you will be there and a 40-percent chance he'll find you if you are. Sixty percent of a 40-percent chance is a 24-percent chance. Indifference!
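The entire recipe, weights, marble jar and proof of indifference, can be sketched in a few lines of Python, using the pay-offs above (the layout is mine, for illustration):

```python
from fractions import Fraction
import random

# Your pay-offs: the chance, in percent, of NOT getting caught.
# Rows are where you ski; columns are where he searches.
snowville = [40, 100]   # (he searches Snowville, he searches Aspen)
aspen     = [100, 60]

# The rule of thumb: weight each move by the difference between
# the pay-offs of the OTHER move.
w_snowville = abs(aspen[0] - aspen[1])           # 100 - 60 = 40
w_aspen     = abs(snowville[0] - snowville[1])   # 100 - 40 = 60
total = w_snowville + w_aspen

p_snow, p_aspen = Fraction(w_snowville, total), Fraction(w_aspen, total)

# Proof of indifference: your expected pay-off is the same
# whichever resort he chooses to search.
print(p_snow * snowville[0] + p_aspen * aspen[0])   # 76, if he tries Snowville
print(p_snow * snowville[1] + p_aspen * aspen[1])   # 76, if he tries Aspen

# A 76-percent chance of escape is the 24-percent chance of capture
# described above. The Mason jar, in code:
print(random.choices(["Snowville", "Aspen"], weights=[w_snowville, w_aspen])[0])
```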
Even if he catches you and blacks your eye, you'll know you didn't make an avoidable mistake.
Suppose the boyfriend knows game theory, too! He can do this much: He can assure himself that no matter what you do, he will have at least a 24-percent chance, which is, after all, better than the zero he would get by guessing wrong. And he does it, of course, by getting a Mason jar and putting in four black marbles and six white marbles.
You can appreciate the theory's appeal. Economists leaped on it as a way to aid decision making in the market place, then military thinkers started looking for adaptations to their own use. Psychologists tried it out in laboratory games, to see if people would follow it intuitively. They did, up to a point--though it's better to figure out the numbers for yourself. While it may sound like a parlor game, the theory is taken seriously by hard-nosed businessmen. When the late Edward G. Bennion was consulting economist for Standard Oil of New Jersey, he demonstrated how game theory could be used in making capital budgeting decisions. In one of his examples, described in Harvard Business Review, the businessman assumes that nature--or, if you like, God--is the opponent. However, he does not assume a vengeful, Old Testament God consciously persecuting him. Rather, it is Einstein's dice-playing God. This makes an important difference.
Bennion's illustration involves a company with some extra cash on hand and two choices for putting it to work: plant expansion or investment in securities. "Nature" also has two moves: pushing the business cycle toward prosperity or toward recession. If there is prosperity, investment in plant will yield 17 percent and securities will yield 5 percent. If there is recession, plant will yield only one percent and securities will yield four percent. Management goes to the company economist and asks him to do what Dan Ellsberg did when he studied the wind direction during that California fire: figure out the odds on nature's side. The economist consults his charts, checks the latest indicators and decides there is a 40-percent chance of prosperity and a 60-percent chance of recession. Management prepares its matrix:

                        Prosperity (40%)    Recession (60%)
  Invest in plant             17%                 1%
  Invest in securities         5%                 4%
Since you have the advantage of being able to estimate the probabilities of nature's possible moves, you ignore the saddle point. It is as though you could peek at the other player's cards, and you want to make the most of this opportunity.
The unsophisticated manager might look at this matrix and decide that since recession is more likely than prosperity, the safe thing is to invest in securities. This will ensure a minimum four-percent return and avoid the risk of getting the one percent that would be the most probable outcome of an investment in plant. But this kind of thinking ignores mathematical expectancy.
The expected return from securities, with the given odds, is 40 percent of five plus 60 percent of four. It works out to 4.4 percent. The expected return from plant, figured the same way, is 40 percent of 17 plus 60 percent of one: 7.4 percent. Why are the separate probabilities added? Because prosperity and recession are mutually exclusive, like heads and tails on a coin. One will happen, not both.
Because an expected return of 7.4 percent beats 4.4 percent, the company, then, should invest in plant. But Bennion suggests an even nicer judgment. The company should calculate, he says, the level of probabilities for recession and prosperity that will leave management indifferent to its investment choice. High school algebra produces the answer: When the odds in favor of recession reach 80-20, it makes no difference which action the company takes. Its expected return is the same: 4.2 percent. The moment recession is judged to have a higher-than-80-percent probability, the company should shift from plant to securities. This knowledge gives the businessman a figure to paste on the wall while he thinks ahead to possible changes in the economic trend. It also tells him exactly how accurate his economist's forecasts must be to keep the company out of trouble. If the economist predicts a 60-percent chance of recession and the indifference point is 80 percent, then top management can relax as long as it trusts the economist to keep his forecasting within a 20-percent error margin.
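All of it, expected returns and indifference point alike, reduces to a few lines. A sketch in Python, using Bennion's figures from the example (the function name is mine):

```python
# Bennion's figures: percentage yields under each of nature's moves.
plant      = {"prosperity": 17, "recession": 1}
securities = {"prosperity": 5,  "recession": 4}

def expected(yields, p_recession):
    return yields["prosperity"] * (1 - p_recession) + yields["recession"] * p_recession

p = 0.60   # the economist's forecast of recession
print(round(expected(plant, p), 1))        # 7.4 -- plant beats...
print(round(expected(securities, p), 1))   # 4.4 -- ...securities

# The indifference point: solve 17(1-p) + p = 5(1-p) + 4p  ->  p = 12/15 = 0.8.
p_star = 12 / 15
print(p_star, round(expected(plant, p_star), 1), round(expected(securities, p_star), 1))
# 0.8 4.2 4.2 -- past an 80-percent chance of recession, switch to securities.
```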
The range of possible games that have been studied and analyzed since 1944 goes far beyond the basic two-person, zero-sum case. The going gets murky when you deal with the possibility of mutual gain for the players mixed with conflict. Such non-zero-sum games are more often found in life, and Von Neumann and Morgenstern did not shrink from considering them. To do so, they had to push modern math to its outer limits. But along the way, they opened up some new ideas and novel ways of thinking that can be handled on a non-mathematical level.
It helps to think about the non-zero-sum game if you imagine nature--or Einstein's unconcerned, dice-rolling God--as a third player in the game. If you and your real-life opponent are both going to win, someone should pay, and so you can think of nature as paying. The problem, then, becomes one of finding the cell in the matrix where you both come out ahead. And if there is more than one such cell, you must figure out a way to make your strategies converge on the same one.
Immediately, two problems arise: coordination and bargaining. The game may be structured so that you can't communicate, like two lost infantry patrols in the jungle trying to join forces. And even if you do coordinate your moves and conspire to obtain a net gain from nature, there is the problem of sharing the spoils of this victory. If you can't agree on how to share it, you may both end up losing.
Buying a used car is a non-zero-sum game. Somewhere between the highest price you are willing to pay and the lowest price for which the dealer will sell is an area of mutual interest. Within this zone of overlap are the prices where each of you would rather make a deal than call the whole thing off. The problem is to find the exact point within that zone to make the deal. The conflict comes because you want to deal in the low end of the zone; he wants to settle at the high end. Somehow, an equilibrium point has to be found.
One way to establish an equilibrium point is for one side to convince the other that he is committed to not going beyond a specific price. One way to do this is to shop only for cars priced beyond your ability to pay. If you cannot pay the asking price, you are more likely to convince the dealer that you will not, because truth, as Schelling has noted, is easier to demonstrate than falsehood. "The sophisticated negotiator," he says in The Strategy of Conflict, "may find it difficult to seem as obstinate as a truly obstinate man. If a man knocks at a door and says that he will stab himself on the porch unless given ten dollars, he is more likely to get the ten dollars if his eyes are bloodshot."
Another way to establish an equilibrium point is to offer to "split the difference." There is no inherent logic in splitting the difference, except that it does provide a distinctive landmark on which the players can converge. Everyday life is full of such cues for coordination. Schelling demonstrated their existence in a fascinating series of experiments. In one case, he posed this problem to a sample of 41 persons in New Haven:
You are to meet someone in New York City. You both know the date of the meeting but not the hour or the place. Where would you go and when? A majority of his sample chose the information booth at Grand Central Station. Virtually all picked the same time--noon.
When large groups of people are involved, tacit coordination becomes more difficult. In 1958, when the first Negro family moved into a previously all-white section of northwest Washington, panic selling was averted by a group committed to fair housing. Dominated by white liberals and welcoming Negro newcomers, it sought to stabilize a mixed racial balance by encouraging white buyers and fighting a real-estate-industry pact to make the area exclusively Negro. This organization stopped short of setting a fixed ratio of black to white. A quota system was too much like discrimination for liberal consciences. So while their efforts slowed the process, the neighborhood steadily tipped over the years from white to black. Whites were reluctant to move in, because they expected the neighborhood to become all black, and this expectation became self-fulfilling. Negroes were reluctant to move into other, all-white areas, because they did not expect other Negroes to move there. This, too, was self-fulfilling. The problem was that there were no cues, no equilibrium points between zero Negroes and 100 percent Negroes.
The international arms race presents a close parallel. Equilibrium points between total disarmament and total war exist, but they are fragile and hard to find. The world has rested at such a point in recent years, with each of the two major powers having the capacity for annihilating the other and each deterred from using this capacity by the lack of a good defense against retaliation. This is not the most comforting kind of equilibrium, but it is better than none. And the chief disadvantage of building the new "thin" antimissile system is that it will move the balance of power off the tested equilibrium point without any guarantee that there will ever be another, short of total war.
In some bargaining cases, an equilibrium point can be discovered, only to have one side unable to move toward it, for fear of violating a carefully established commitment. Such a violation could damage the credibility of all future commitments, and so the wise negotiator will always help his opponent find a rationalization for backing down. A car dealer is more likely to accept your proposal to pay $100 less than his "final" offer if you agree not to require him to fix the funny thumping noise in the rear wheel. There may be no funny thumping noise. Pretending there is lets him abandon his commitment with grace.
In the best of strategies, commitment communicates itself, eliminating the need to argue. When an advancing army burns the bridges behind it, the flames that bar retreat signal the foe that the invaders have no choice but to advance. The girl who does not want to be seduced tonight may purposely refrain from taking her pill this morning. At Harvard Square, pedestrians and motorists continually challenge each other's commitment. Each wishes to occupy the territory in the middle of the street. Neither wants a bloody accident. Most pedestrians try to fix oncoming drivers with hypnotic stares while taking tentative steps toward the street. The drivers ignore them. Harvard-Radcliffe students who have taken Economics 135 do better. Instead of looking at the approaching traffic, they turn their heads the other way and step out. Drivers, seeing a pedestrian who cannot see them and therefore cannot leap out of the way, are forced to stop. They lose.
Whether or not you develop the habit of reducing problems to game matrices, it is important to learn to quantify your preferences. If your problem is developing a strategy for spending Saturday night, it helps if you have already established that you prefer Marge to Peggy and Louise to both of them. This is called an ordinal scale. An interval scale is even more useful, because it measures the exact degree to which you prefer Louise to Marge and Marge to Peggy. Von Neumann and Morgenstern invented a way to create such scales of preference.
Since Louise ranks at the top of the scale, assign her a value of 100. Now ask yourself the question: Suppose you had a choice between a sure date with Marge and a 95-percent chance of a date with Louise; which would you take? If one choice is as attractive as the other, then Marge has a rating of 95 on your preference scale. Repeat the procedure with Peggy, finding the point at which a sure date with her is equal to a probability (P) of a date with Louise. Peggy's utility is equal to 100 x P. If you would be indifferent to a choice between a sure date with Peggy and a 50-percent probability of a date with Louise, then Peggy's rating is 50 on the scale.
This is highly useful data in decision making. If you find that you like Louise exactly twice as much as Peggy, for example, then you can govern yourself accordingly; e.g., by exposing yourself to twice as much risk for her or by spending twice as much money on her without worrying that you are making an uneconomic decision.
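A sketch of the scale building in Python, with the indifference probabilities as the only inputs you supply (the numbers are the ones above; the function name is my own):

```python
# Von Neumann-Morgenstern scaling: anchor the favorite at 100, then rate
# each alternative by the lottery probability P at which a sure date with
# her just equals a P chance of a date with the favorite.
TOP = 100

def utility(indifference_p):
    # indifference_p: the chance of Louise that matches a sure date.
    return TOP * indifference_p

scale = {
    "Louise": TOP,
    "Marge": utility(0.95),   # indifferent at a 95-percent chance of Louise
    "Peggy": utility(0.50),   # indifferent at a 50-percent chance of Louise
}
print(scale)                             # {'Louise': 100, 'Marge': 95.0, 'Peggy': 50.0}
print(scale["Louise"] / scale["Peggy"])  # 2.0 -- twice the risk, or twice the money
```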
The key to this system of scaling preferences is, of course, the ease with which one can find his own indifference level. Even two children dividing a cake can do it. If the one who gets to cut has to let the other have first choice, the cutter is careful to divide the cake so that he is indifferent to the decision of the chooser. Most of us make such judgments unconsciously every day. Doing it with awareness is more efficient. And more fun. Even so, it is important to recognize that there will always be some situations that will be sticky, no matter how skillfully one wields the slide rule. Game theorists delight in dreaming up brutal examples of cases where coordination is impossible and victory goes to the man who wins and then betrays a confidence. Martin Shubik of Yale postulates the imaginary case of a prison with a central blockhouse that can be entered by only one man. Once inside, this man has two choices: He can push a button that will open the prison gates for ten seconds, long enough for everyone standing near them to get out but leaving him in the blockhouse, or he can push another button that flies his blockhouse over the prison wall and leaves his fellow inmates behind.
The problem, of course, is that the prisoners cannot afford to let anyone enter the blockhouse until they have found a man they can trust. Once inside, such a man might cease to be trustworthy. Shubik and two colleagues have devised a parlor game that duplicates this situation in real life. The game is called So Long, Sucker. When Shubik tried it out in the parlor with dinner guests, the guests, a husband and wife, went home in separate cabs.
"One of them double-crossed the other," Shubik explains. "I think it was the wife."
The game can be played with cards or chips. A psychiatrist friend of the inventors tried using it in therapeutic sessions. "He hoped it could be controlled," Shubik recalls. "But he found it to be so vicious that he abandoned it."
That this game is as painful as predicted by theory, and that it grew from a line of once obscure mathematical thought begun by Von Neumann and Morgenstern a quarter of a century ago, is comforting evidence that everyday-life applications of game theory do, indeed, exist. If you believe in a capricious universe, if you think of yourself as the product of a random wandering of the atoms, so much the better. Game theory tells us not only how to bring order to randomness but how to use randomness to order our lives.
One can, of course, get too cute about this. Once you start breaking down your daily problems into two-by-two matrices, with your preferences ordered and scaled, you may get a heady sense of fate control that leaves you dangerously exposed. Schelling tells the story of a colleague who planned to lend his office to a friend while he was away but lacked an extra key.
It was, therefore, necessary to hide the key somewhere, and not under the mat, which, as everyone knows, is the first place a burglar looks. Being a game theorist, the key owner was sure he could cope with the problem. It was a two-person, zero-sum game between himself and the burglar.
He considered all the possible hiding places and all the possible moves that he and the burglar might make. He then arrived at the one hiding place that was the least probable location for a burglar to look. It was agreed that the key would be left there. At the end of his day's work, he locked his office door, went to the agreed-upon hiding place and found his sense of personal efficacy brutally shattered. Someone else's key was already there. But it was an unavoidable mistake, and he at least had the comfort of knowing he had rolled his own dice.