Policy gradient Reinforcement Learning (RL) algorithms have received much attention for seeking stochastic policies that maximize the average reward. Extensions based on the Natural Gradient (NG) show promising learning efficiency because they take into account a metric suited to the task. Although there are two candidate metrics, Kakade's Fisher Information Matrix (FIM) and Morimura's FIM, all RL algorithms with NG have followed Kakade's approach. In this paper, we describe a generalized Natural Gradient (gNG) obtained by linearly interpolating the two FIMs, and we propose an efficient implementation of gNG learning based on the theory of estimating functions: the generalized Natural Actor-Critic (gNAC). The gNAC algorithm uses a near-optimal auxiliary function to reduce the variance of the gNG estimates. Interestingly, gNAC can be regarded as a natural extension of the current state-of-the-art NAC algorithm when the interpolating parameter is appropriately selected. Numerical experiments showed that the proposed gNAC algorithm estimates the gNG efficiently and outperformed the NAC algorithm.
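The core construction in the abstract, blending the two candidate FIMs with a single interpolating parameter and using the blend to precondition the vanilla policy gradient, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the interpolation weight `kappa`, and the `ridge` regularizer are assumptions introduced for the example, and the FIMs here are toy matrices rather than estimates from trajectories.

```python
import numpy as np

def generalized_natural_gradient(grad_J, fim_kakade, fim_morimura,
                                 kappa=0.5, ridge=1e-6):
    """Hypothetical sketch of a gNG update direction.

    Blends Kakade's and Morimura's FIMs with weight kappa
    (kappa=0 recovers Kakade's NG, kappa=1 Morimura's), then
    preconditions the vanilla policy gradient grad_J with the
    inverse of the blended matrix.
    """
    fim = (1.0 - kappa) * fim_kakade + kappa * fim_morimura
    # Small ridge term keeps the blended FIM invertible.
    fim = fim + ridge * np.eye(fim.shape[0])
    # Solve fim @ x = grad_J instead of forming an explicit inverse.
    return np.linalg.solve(fim, grad_J)

# Toy 2-parameter policy: two (made-up) diagonal FIM estimates.
fim_k = np.diag([2.0, 1.0])
fim_m = np.diag([1.0, 2.0])
grad = np.array([1.0, 1.0])

ng_kakade = generalized_natural_gradient(grad, fim_k, fim_m, kappa=0.0)
ng_blend = generalized_natural_gradient(grad, fim_k, fim_m, kappa=0.5)
```

With `kappa=0.0` the direction is just Kakade's FIM inverse applied to the gradient; intermediate `kappa` values interpolate smoothly between the two metrics, which is the degree of freedom the gNAC algorithm exploits.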
Author Information
Tetsuro Morimura (IBM)
Eiji Uchibe
Junichiro Yoshimoto (Okinawa Institute of Science and Technology)
Kenji Doya (Okinawa Institute of Science and Technology Graduate University)
More from the Same Authors
- 2021 : Mean-Variance Efficient Reinforcement Learning by Expected Quadratic Utility Maximization » Masahiro Kato · Kei Nakagawa · Kenshi Abe · Tetsuro Morimura
- 2021 : Kenji Doya Talk Q&A » Kenji Doya
- 2021 : Invited Talk: Kenji Doya - Natural and Artificial Reinforcement Learning » Kenji Doya
- 2013 Poster: Solving inverse problem of Markov chain with partial observations » Tetsuro Morimura · Takayuki Osogami · Tsuyoshi Ide