Interesting People mailing list archives

The Uber Dilemma


From: "Dave Farber" <farber () gmail com>
Date: Thu, 17 Aug 2017 07:48:36 -0400




Begin forwarded message:

From: Dewayne Hendricks <dewayne () warpspeed com>
Date: August 16, 2017 at 11:15:54 PM EDT
To: Multiple recipients of Dewayne-Net <dewayne-net () warpspeed com>
Subject: [Dewayne-Net] The Uber Dilemma
Reply-To: dewayne-net () warpspeed com

[Note:  This item comes from friend David Rosenthal.  DLH]

The Uber Dilemma
By Ben Thompson
Aug 14 2017
<https://stratechery.com/2017/the-uber-dilemma/>

By far the most well-known “game” in game theory is the Prisoners’ Dilemma. Albert Tucker, who formalized the game 
and gave it its name in 1950, described it as such:

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of 
communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge. 
They hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each 
prisoner a bargain. Each prisoner is given the opportunity either to: betray the other by testifying that the other 
committed the crime, or to cooperate with the other by remaining silent. The offer is:

   • If A and B each betray the other, each of them serves 2 years in prison
   • If A betrays B but B remains silent, A will be set free and B will serve 3 years in prison (and vice versa)
   • If A and B both remain silent, both of them will only serve 1 year in prison (on the lesser charge)

The dilemma is normally presented in a payoff matrix like the following (sentences in years, so lower is better; reconstructed here from the offers above):

                      B stays silent      B betrays
   A stays silent     A: 1  /  B: 1       A: 3  /  B: 0
   A betrays          A: 0  /  B: 3       A: 2  /  B: 2

What makes the Prisoners’ Dilemma so fascinating is that the result of both prisoners behaving rationally — that is 
betraying the other, which always leads to a better outcome for the individual — is a worse outcome overall: two 
years in prison instead of only one (had both prisoners behaved irrationally and stayed silent). To put it in more 
technical terms, mutual betrayal is the only Nash equilibrium: once both prisoners realize that betrayal is the 
optimal individual strategy, there is no gain to unilaterally changing it.
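The equilibrium claim can be checked mechanically. The sketch below (a minimal illustration, not from the article) encodes the sentences listed above and tests each strategy pair for a profitable unilateral deviation; only mutual betrayal survives.

```python
# Payoff matrix from the article: entries are years in prison
# (lower is better). Strategy 0 = stay silent, 1 = betray.
YEARS = {
    (0, 0): (1, 1),  # both silent: 1 year each (lesser charge)
    (0, 1): (3, 0),  # A silent, B betrays: A serves 3, B goes free
    (1, 0): (0, 3),  # A betrays, B silent: A goes free, B serves 3
    (1, 1): (2, 2),  # both betray: 2 years each
}

def is_nash(a, b):
    """True if neither prisoner can shorten their own sentence
    by unilaterally switching strategies."""
    ya, yb = YEARS[(a, b)]
    a_improves = YEARS[(1 - a, b)][0] < ya
    b_improves = YEARS[(a, 1 - b)][1] < yb
    return not (a_improves or b_improves)

equilibria = [(a, b) for a in (0, 1) for b in (0, 1) if is_nash(a, b)]
print(equilibria)  # [(1, 1)] -- mutual betrayal is the only Nash equilibrium
```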

TIT FOR TAT

What, though, if you played the game multiple times in a row, with full memory of what had occurred previously (this 
is known as an iterated game)? To test what would happen, Robert Axelrod set up a tournament and invited fourteen 
game theorists to submit computer programs with the algorithm of their choice; Axelrod described the winner in The 
Evolution of Cooperation:

TIT FOR TAT, submitted by Professor Anatol Rapoport of the University of Toronto, won the tournament. This was the 
simplest of all submitted programs and it turned out to be the best! TIT FOR TAT, of course, starts with a 
cooperative choice, and thereafter does what the other player did on the previous move…

Analysis of the results showed that neither the discipline of the author, the brevity of the program—nor its 
length—accounts for a rule’s relative success…Surprisingly, there is a single property which distinguishes the 
relatively high-scoring entries from the relatively low-scoring entries. This is the property of being nice, which is 
to say never being the first to defect.

This is the exact opposite outcome of a single-shot Prisoners’ Dilemma, where the rational strategy is to be mean; 
when you’re playing for the long run it is better to be nice — you’ll make up any short-term losses with long-term 
gains.
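The iterated dynamic is easy to see in miniature. The sketch below is not Axelrod's tournament code; it uses the conventional point payoffs (higher is better: mutual cooperation 3 each, mutual defection 1 each, defecting on a cooperator 5 vs. 0) and pits TIT FOR TAT against itself and against an always-defect strategy.

```python
# A minimal iterated Prisoners' Dilemma, scored as points gained
# (the usual tournament convention) rather than years served.
C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(opponent_history):
    """Start with a cooperative choice, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else C

def always_defect(opponent_history):
    return D

def play(strat_a, strat_b, rounds=10):
    """Run an iterated game; each strategy sees only the opponent's
    past moves. Returns the total scores (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)  # A reacts to B's history, and vice versa
        move_b = strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): burned once, then matches
```

Being "nice" costs TIT FOR TAT exactly one round against a defector, while two nice strategies paired together compound the gains from cooperation every round.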

[snip]

Dewayne-Net RSS Feed: http://dewaynenet.wordpress.com/feed/
Twitter: https://twitter.com/wa8dzp




