Dynamic Incentives and Markov Perfection: Putting the 'Conditional' in Conditional Cooperation


Many economic applications, across an array of fields, use dynamic games to study strategic interactions that are dynamic in nature. While these games generically have large sets of possible equilibria, Markov perfect equilibrium (MPE) is the main selection criterion in applied work. Our paper experimentally examines this assumed selection across a number of simple dynamic games. Starting from a two-state modification of the most studied static environment, the infinitely repeated prisoner's dilemma (PD), we work outward, characterizing the response to broad qualitative changes in the game's features. Subjects in our experiments show an affinity for conditional cooperation, readily conditioning their behavior not only on the state but also on the recent history of play. More-efficient history-dependent play is the norm in many treatments, though the frequency of MPE-like play can be predicted with a modification of an index developed for infinitely repeated games. A dynamic extension of the basin of attraction is shown to have predictive power for the selection of MPE.

Assistant Professor, Department of Economics, UC Santa Barbara
Wednesday, October 28, 2015 - 13:00 to 14:00
Room 23, Department of Industrial Engineering, University of Chile (Domeyko 2338, second floor, Santiago)