Reinforcement learning with restrictions on the action set

Thursday 20 March 2014, 12:30
Organized by: Arnaud Legrand

Speaker: Mario Bravo


Location: INRIA Grand amphi

Consider a 2-player normal-form game repeated over time. We introduce an adaptive learning procedure in which each player observes only their own realized payoff at each stage. We assume that the players do not know their own payoff function and have no information about the other player. Furthermore, the players face restrictions on their action sets: at each stage, each player's choice is limited to a subset of their actions. We prove that the empirical distributions of play converge to the set of Nash equilibria in zero-sum games, in potential games, and in games where one player has only two actions.
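
As a rough illustration of the feedback model described in the abstract, the sketch below simulates a repeated 2-player game in which each player observes only their own realized payoff and, at every stage, may only choose from a currently available subset of actions. The specific update rule (a softmax choice over cumulative scores), the function names, and the parameters are illustrative assumptions, not the procedure analyzed in the talk.

# Illustrative sketch only: the talk's actual learning procedure is not
# specified in the abstract. Here each player keeps a score per action and,
# at every stage, plays a softmax choice restricted to the actions currently
# available to them. All names, parameters, and the update rule are hypothetical.
import math
import random

def softmax_choice(scores, available, temperature=1.0):
    """Sample one of the available actions with probability proportional
    to exp(score / temperature), ignoring unavailable actions."""
    weights = [math.exp(scores[a] / temperature) for a in available]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for a, w in zip(available, weights):
        acc += w
        if r <= acc:
            return a
    return available[-1]

def play_repeated_game(payoff1, payoff2, restrictions, n_stages=1000, step=0.1):
    """Run the repeated game for n_stages.

    payoff1[i][j], payoff2[i][j]: payoffs of players 1 and 2 when they play
        actions i and j (unknown to the players; only used here to simulate
        the realized payoff each of them observes).
    restrictions: function stage -> (available_actions_1, available_actions_2),
        modelling the per-stage restriction on each action set.
    Returns the empirical distribution of play (joint frequencies).
    """
    n1, n2 = len(payoff1), len(payoff1[0])
    scores1, scores2 = [0.0] * n1, [0.0] * n2
    counts = [[0] * n2 for _ in range(n1)]

    for t in range(1, n_stages + 1):
        avail1, avail2 = restrictions(t)
        i = softmax_choice(scores1, avail1)
        j = softmax_choice(scores2, avail2)
        # Each player observes only their own realized payoff...
        u1, u2 = payoff1[i][j], payoff2[i][j]
        # ...and reinforces the action just played.
        scores1[i] += step * u1
        scores2[j] += step * u2
        counts[i][j] += 1

    return [[c / n_stages for c in row] for row in counts]

if __name__ == "__main__":
    # Matching pennies (zero-sum); here both actions are always available.
    A = [[1, -1], [-1, 1]]
    B = [[-1, 1], [1, -1]]
    print(play_repeated_game(A, B, lambda t: ([0, 1], [0, 1])))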