Stochastic stability of reinforcement learning in positive-utility games

Authors Georgios C. Chasparis
Title Stochastic stability of reinforcement learning in positive-utility games
Type article
Journal IEEE Transactions on Automatic Control
Publisher IEEE
Volume early access
DOI 10.1109/TAC.2019.2895300
ISSN 1558-2523 (online)
Month January
Year 2019
SCCH ID# 17058

This paper considers a class of reinforcement-based learning (namely, perturbed learning automata) and provides a stochastic-stability analysis in repeatedly played, positive-utility, finite strategic-form games. Prior work on this class of learning dynamics primarily analyzes asymptotic convergence through stochastic approximation, where convergence can be associated with the limit points of an ordinary differential equation (ODE). However, analyzing global convergence through an ODE approximation requires the existence of a Lyapunov or potential function, which naturally restricts the analysis to a narrow class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing asymptotic convergence that is based on an explicit characterization of the invariant probability measure of the induced Markov chain. We further provide a methodology for computing the invariant probability measure in positive-utility games, together with an illustration in the context of coordination games.
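To give a concrete feel for the learning dynamics discussed in the abstract, the following is a minimal sketch of perturbed learning automata in a two-player, two-action coordination game. The payoff values, step size `eps`, perturbation probability `lam`, and number of iterations are illustrative assumptions, not the paper's exact parameters; the update rule is the standard linear reward scheme, where each agent occasionally trembles to a uniformly random action.

```python
import random

def simulate(payoff, eps=0.05, lam=0.01, steps=5000, seed=1):
    """Perturbed learning automata in a symmetric 2-player, 2-action game.

    Sketch only: parameters are illustrative, not from the paper.
    """
    random.seed(seed)
    # Each agent starts with the uniform mixed strategy over its 2 actions.
    x = [[0.5, 0.5], [0.5, 0.5]]
    for _ in range(steps):
        acts = []
        for i in range(2):
            if random.random() < lam:
                # perturbation: tremble to a uniformly random action
                acts.append(random.randrange(2))
            else:
                # otherwise sample from the current mixed strategy
                acts.append(0 if random.random() < x[i][0] else 1)
        for i in range(2):
            # symmetric payoff matrix, so both agents receive the same utility
            u = payoff[acts[0]][acts[1]]
            a = acts[i]
            # linear reward update: move probability mass toward the
            # realized action, proportionally to the (positive) utility
            for k in range(2):
                target = 1.0 if k == a else 0.0
                x[i][k] += eps * u * (target - x[i][k])
    return x

# Coordination game with positive utilities: matching pays 1.0,
# mismatching pays 0.2 (illustrative values).
payoff = [[1.0, 0.2], [0.2, 1.0]]
x1, x2 = simulate(payoff)
```

Because the utilities are strictly positive, every realized action is reinforced, but coordination is reinforced more strongly; running the sketch shows each agent's strategy concentrating on a pure action, which is the kind of long-run behavior the paper's invariant-measure analysis characterizes.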