Some Game-Theoretic considerations: approaching utopia
Kadri Tõldsepp


A fundamental assumption throughout Game Theory is that all participating agents are fully rational and selfish: they care only about maximizing their own "satisfaction" and they act accordingly.
This satisfaction is formally modeled through utility functions that can incorporate all the complex parameters and interactions that shape an agent's preferences. Consider the following simple auction setting: an auctioneer wants to sell a piece of art to three interested bidders, who each write their bid in a sealed envelope and submit it simultaneously. The auctioneer then opens the envelopes and decides which bidder gets the item and at what price. Let's say the submitted bids are bidder 1: 10€, bidder 2: 6€ and bidder 3: 4€.

If the auctioneer uses a first-price auction, i.e. sells the item to bidder 1 for a price of 10€, the auction will not be truthful: if bidder 1 lies and declares a lower bid of 7€, he still wins the item and pays the lower price of 7€. Truthfulness is the cornerstone of the field of Mechanism Design: we want to design mechanisms under which honest reporting is the best strategy for all participants, so that they have no reason to lie and strategize. The most well-known truthful auction is the second-price (Vickrey) auction, simple yet very powerful in achieving truthfulness: sell the item to bidder 1, but for the price of bidder 2's bid. This way, bidder 1 has no reason to declare a lower false bid anywhere between 10€ and 6€, because he will pay 6€ anyway.
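
To make the difference between the two rules concrete, here is a minimal Python sketch (the function names and the dictionary layout are my own, chosen only for illustration) that runs both payment rules on the bids from the example above:

    def first_price(bids):
        # The highest bidder wins and pays their own bid.
        winner = max(bids, key=bids.get)
        return winner, bids[winner]

    def second_price(bids):
        # The highest bidder wins but pays the second-highest bid (Vickrey).
        winner = max(bids, key=bids.get)
        price = max(b for name, b in bids.items() if name != winner)
        return winner, price

    truthful = {"bidder 1": 10, "bidder 2": 6, "bidder 3": 4}
    shaded   = {"bidder 1": 7,  "bidder 2": 6, "bidder 3": 4}  # bidder 1 underbids

    print(first_price(truthful), first_price(shaded))    # bidder 1 pays 10 vs 7: lying helps
    print(second_price(truthful), second_price(shaded))  # bidder 1 pays 6 in both cases

Under the first-price rule bidder 1 saves 3€ by shading his bid, while under the second-price rule his payment is determined entirely by the other bids, so misreporting gains him nothing.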

But do these models of "selfishness" and "truthfulness" really capture "real" everyday social interaction? For example, experimental studies have shown that participants in such auctions will act "irrationally", in ways that "externally" affect others: bidder 2 is not getting the item anyway, so he can lie and report 9€ just to drive up the winning bidder's price (spiteful behavior), or report a lower bid of 4€ just to drive the bids (and thus the prices) down (altruistic behavior).
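
Continuing the Python sketch above (reusing the same second_price function), one can see why these deviations are puzzling under the standard selfish model: bidder 2 still loses and pays nothing in every case, yet his report alone determines what the winner pays.

    spiteful   = {"bidder 1": 10, "bidder 2": 9, "bidder 3": 4}
    altruistic = {"bidder 1": 10, "bidder 2": 4, "bidder 3": 4}

    print(second_price(spiteful))    # bidder 1 still wins, but now pays 9€ instead of 6€
    print(second_price(altruistic))  # bidder 1 still wins and pays only 4€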

In a recent paper (http://arxiv.org/abs/1208.3939) we try to formalize utility models that incorporate these behaviors, called externalities, and to propose "stricter" notions of truthfulness that not only give agents no reason to deviate from their true bids, but also punish them in case of lying. How closely can we approximate a "utopia" where no such "irrational", malicious phenomena occur? This notion of strong truthfulness may be especially relevant in Security settings, where the "honest" behavior of the participating parties is of utmost importance.
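
For readers curious what an externality-aware utility might look like, here is one generic way such preferences are often written down in the spite/altruism literature; this is only an illustrative sketch under that common assumption, not necessarily the exact model of the paper:

    def utility_with_externalities(i, values, payments, alpha):
        # Agent i's own quasi-linear utility: value received minus price paid.
        own = values[i] - payments[i]
        # Externality term: a weighted sum of the other agents' utilities.
        # alpha[i][j] < 0 models spite towards j, alpha[i][j] > 0 models altruism.
        others = sum(alpha[i][j] * (values[j] - payments[j])
                     for j in values if j != i)
        return own + others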