Showing posts with label game theory. Show all posts

Thursday, October 27, 2011

Contracts with empty promises

I always feel small talk has no specific purpose and is a waste of time. In fact, every time somebody asks me how I am doing, I launch into a well-reasoned explanation of my current state of affairs, while my interlocutor is just expecting a "well-thanks-and-you?" Yet, some people have found value in such chitchat, see a previous report. But what about contracts with clauses that have no chance of being met? Why would one allow empty promises in a legally binding contract?

David Miller and Kareen Rozen look at contracts that involve teamwork in a complex production environment, where opportunities for moral hazard abound. Performance clauses are hard to specify, so you want to rely on peer monitoring and pressure rather than checking the result of each individual task. Monitoring is costly, and along with statistical complementarities in the success rate, this implies it can be optimal to delegate all the production to one person and the monitoring to another (the least productive one, according to the "Dilbert Principle"), and the latter may resort to wasteful punishment: naming and shaming, and even firing. It is wasteful because it provides no direct benefit to the supervisor and may not even be subgame perfect. Where are the empty promises? The one performing the tasks can promise to fulfill them, but it is obvious to everyone that they cannot all succeed, because of some exogenous probability of failure. The supervisor is thus willing to forgive failures without knowing whether chance or moral hazard is at play.

Tuesday, September 13, 2011

Why is blackmail costly?

Blackmail is a strange concept. Threatening to reveal information is legal. Asking for money for a service is legal. But doing both at the same time is illegal. Even stranger, when the transaction is initiated by the one who could be harmed by the revelation, it is technically bribery, and it is legal. So why the difference? The conventional answer is that blackmail is about rent-seeking. But if the damaging information is freely available, there is no welfare loss justifying the criminalization of blackmail.

Oleg Yerokhin claims the justification can lie in the bargaining power structure between the two parties. Indeed, when the information holder is a monopolist, he has more power than is socially optimal and should thus be punished to internalize this cost. But when the target is a monopolist, the outcome reverses, and the blackmailer should be subsidized rather than punished. Yet I find this argument hardly convincing, on the grounds that blackmailers are surely less likely to be monopolists than victims. Information can be duplicated at zero or near-zero cost, making it difficult for a monopoly to arise in such a situation. But the information in question can easily concern one particular person only.
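To make the monopoly logic concrete, here is a back-of-the-envelope generalized Nash bargaining sketch (my own illustration with made-up numbers, not Yerokhin's model): the blackmailer's take is his bargaining weight times the harm the victim avoids by paying.

```python
# Toy generalized Nash bargaining split of the surplus from silence.
# All numbers are hypothetical, purely for illustration.

def blackmail_price(harm, bargaining_weight):
    """The blackmailer captures a share of the harm the victim avoids,
    proportional to his bargaining weight in [0, 1]."""
    return bargaining_weight * harm

# A monopolist information holder extracts most of the surplus...
assert blackmail_price(100.0, 0.9) == 90.0
# ...while duplicable information (competing holders) drives his weight,
# and hence the price he can charge, toward zero.
assert blackmail_price(100.0, 0.1) == 10.0
```

The point of the sketch is only that the welfare argument hinges entirely on where the bargaining weight sits, which is precisely what duplicable information undermines.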

Monday, September 5, 2011

Emotions in economic interactions

How do you get people to cooperate? By increasing their utility, of course, but that is difficult to measure, and there may be components beyond rationality in emotional contexts. However, there are interesting ways to get neurological hints about positive and negative emotions, by measuring the conductance of the skin. This may help explain why people are sometimes willing to hurt themselves in order to punish others.

Mateus Joffily, David Masclet, Charles Noussair and Marie-Claire Villeval conduct an experiment in which cooperation, free-riding and punishment can happen. They measure skin conductance to reveal the intensity of emotions and let players report whether their emotions are positive or negative. Cooperation and the punishment of free-riding elicit positive emotions, the latter indicating that emotions can override self-interest. That is also because punishment relieves some of the negative emotions from observing free-riding. And one does not like being punished, which leads one to cooperate more in the future. Finally, people like being in a set-up where sanctions are possible, in particular because it allows a virtuous circle of emotions that reinforce each other and lead to more cooperation.
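The underlying game can be sketched as a minimal public-goods game with costly punishment (a toy model with hypothetical numbers, not the authors' actual experimental design): punishing a free-rider lowers the punisher's own payoff, which is why pure self-interest predicts no punishment at all.

```python
# Minimal public-goods game with costly punishment.
# Parameters (endowment, multiplier, cost, fine) are illustrative.

def payoffs(contributions, endowment=20, multiplier=1.6):
    """Each player keeps the unspent endowment and gets an equal share
    of the multiplied common pool."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

def punish(payoffs_, punisher, target, cost=1, fine=3):
    """The punisher pays `cost` to impose `fine` on the target."""
    out = list(payoffs_)
    out[punisher] -= cost
    out[target] -= fine
    return out

# Three cooperators and one free-rider.
base = payoffs([20, 20, 20, 0])
# The free-rider earns more than each cooperator...
assert base[3] > base[0]
# ...and punishing him is costly to the punisher: self-interest alone
# cannot explain it, but relief of negative emotions can.
after = punish(base, punisher=0, target=3)
assert after[0] < base[0]
```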

Monday, August 29, 2011

Market failure and political outcomes

In a perfect economic world, perfect competition and the lack of frictions or externalities make it possible to obtain the most efficient outcome. But once any of those assumptions is lost, outcomes are going to be worse than the first best. In particular, once there are rents to be obtained, from frictions or imperfect competition, the beneficiaries of those rents will try to protect them. And they will try to influence political outcomes in their favor.



Madhav Aney, Maitreesh Ghatak and Massimo Morelli argue that this influence peddling reinforces the market failures. As an example, they take a model of misallocation of entrepreneurial talent due to the imperfect observability of that talent. The resulting power structure then votes for institutions that reinforce such a class structure and thus amplify misallocation and market failures.



Now think about the apparently ever-increasing proportion of lawyers in the political class.

Friday, August 12, 2011

Procrastination in team work

Teamwork can turn out very badly when moral hazard is present: if people do not trust each other or care about each other, nothing gets done. When doing research, we are lucky to be able to choose our co-authors, but even then things can turn for the worse if a team member loses interest. And we all remember how bad it was when a team was forced upon us during our studies. Now, this is all very loose reasoning, so let us get on firmer ground.



Philipp Weinschenk studies team production in a dynamic game with moral hazard. If all players are rewarded equally, they will all wait until the last moment to participate. This is very much like what we often see in political negotiations with a deadline, where nothing happens until the end and players consciously wait for the last moment. The same often happens in collective bargaining. And of course, the outcomes are far from optimal, as the debt ceiling mess in the US has recently shown.



If the rewards are not equally distributed, the outlook is better. Quite obviously, those who are rewarded better will tend to procrastinate less. But they are not necessarily better off than those rewarded less, as they exert more effort. Thus, second-best contracts are unequal ones. But all this falls apart if some players have limited liability (which means they have better outside options) or if some can sabotage. Then everyone waits until the last moment and very little gets done. Think about the US situation again...
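The equal-versus-unequal logic can be sketched with a toy deadline game (my own illustration, not the paper's model): a player exerts effort only once her reward share covers both the effort cost and the option value of waiting, which shrinks linearly to zero at the deadline.

```python
# Toy deadline game: when does a player first find effort worthwhile?
# All parameters (value, cost, number of periods) are hypothetical.

def first_effort_period(share, value=10.0, cost=4.0, periods=5):
    """Earliest period t in which this player's reward share covers the
    effort cost plus the option value of waiting, which falls linearly
    to zero at the deadline."""
    for t in range(periods):
        waiting_benefit = cost * (1 - t / (periods - 1))
        if share * value >= cost + waiting_benefit:
            return t
    # If effort never pays during the game, everyone still piles in
    # at the deadline, as in the equal-rewards case.
    return periods - 1

# Equal split among four players: a share of 0.25 never covers the
# cost, so effort happens only at the deadline (period 4).
# An unequal contract giving one player a share of 0.8 induces
# immediate effort (period 0).
```

For instance, `first_effort_period(0.25)` returns the last period, while `first_effort_period(0.8)` returns period 0: the better-rewarded player moves first, exactly the procrastination pattern described above.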

Friday, July 8, 2011

How should I lie?

Is it OK to lie? The usual answer is that it depends. Big lies are frowned upon, while small lies are somewhat tolerated. Does this necessarily mean I should avoid lying?

Gerald Eisenkopf, Ruslan Gurtoviy and Verena Utikal study the size of lies in an experimental setup. Their first observation is that it depends whom you are lying to: honest people punish according to the size of the lie, while chronic liars really do not care. Their second is that big lies are punished more than small lies. This is hardly surprising. What would have been more interesting to learn is whether the punishment function is concave or convex, that is, whether the returns to scale in lying are increasing or decreasing. In some sense we already have an idea about this from tax penalties, which are usually proportional to the offense, plus a fixed cost. Ultimately, one would want to compare the shape of the penalty function to that of the benefit function. Then I would finally know whether I should lie big or small.
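To see why the shape matters, consider a hypothetical tax-penalty-style schedule, a fixed cost plus a proportional part, against a linear benefit (my numbers, not the paper's): the fixed cost makes the average penalty per unit fall with the size of the lie, so small lies never pay while big ones can.

```python
# Hypothetical penalty and benefit schedules for lies of a given size.
# All rates are made up, purely to illustrate the shape argument.

def penalty(size, fixed=5.0, rate=1.0):
    """Tax-penalty style: a fixed cost plus a part proportional to
    the size of the lie; no lie, no penalty."""
    return fixed + rate * size if size > 0 else 0.0

def net_gain(size, benefit_rate=1.5):
    """Net gain from a lie of a given size under a linear benefit."""
    return benefit_rate * size - penalty(size)

# A small lie is not worth it, a big one is: conditional on lying
# at all, this penalty shape says one should lie big.
assert net_gain(2) < 0    # benefit 3, penalty 7
assert net_gain(20) > 0   # benefit 30, penalty 25
```

A convex penalty (rising faster than proportionally) would flip this conclusion and make small lies the only profitable ones, which is exactly why the curvature question raised above is the interesting one.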