Risk management is critical in decision making, and the \emph{mean-variance} (MV) trade-off is one of the most common criteria. However, in reinforcement learning (RL) for sequential decision making under uncertainty, most existing methods for MV control suffer from computational difficulties caused by the \emph{double sampling} problem. In this paper, instead of strict MV control, we consider learning MV efficient policies, i.e., policies that are Pareto efficient with respect to the MV trade-off. To this end, we train an agent to maximize the expected quadratic utility function, a common objective of risk management in finance and economics. We call our approach direct expected quadratic utility maximization (EQUM). EQUM does not suffer from the double sampling issue because its objective does not require a gradient estimate of the variance. We confirm that the maximizer of the EQUM objective directly corresponds to an MV efficient policy under a certain condition. We conduct experiments in benchmark settings to demonstrate the effectiveness of EQUM.
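As a minimal sketch of why this sidesteps double sampling, assume the standard quadratic utility $u(x) = x - \tfrac{\lambda}{2}x^2$ with risk-aversion parameter $\lambda > 0$ and return $R$ under a policy parameterized by $\theta$; the notation here is illustrative and may differ from the paper's own:

% Illustrative derivation (notation assumed, not taken from the abstract).
% The gradient of the variance contains a product of expectations,
%   \nabla_\theta \mathrm{Var}[R]
%     = \nabla_\theta \mathbb{E}[R^2] - 2\,\mathbb{E}[R]\,\nabla_\theta \mathbb{E}[R],
% and estimating the product term E[R] * \nabla E[R] without bias requires
% two independent sample sets: the double sampling problem.
% The expected quadratic utility objective is instead a single expectation,
\[
  J(\theta)
    = \mathbb{E}\!\left[R - \tfrac{\lambda}{2} R^2\right]
    = \mathbb{E}[R] - \tfrac{\lambda}{2}\left(\mathrm{Var}[R] + \mathbb{E}[R]^2\right),
\]
% using E[R^2] = Var[R] + E[R]^2 for the second equality, so a one-sample
% Monte Carlo gradient estimate is unbiased while the objective still
% trades the mean against the variance.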
Author Information
Masahiro Kato (CyberAgent, Inc.)
Kei Nakagawa (Nomura Asset Management Co., Ltd.)
Kenshi Abe (CyberAgent, Inc.)
Tetsuro Morimura (IBM)
More from the Same Authors
- 2021 Poster: The Adaptive Doubly Robust Estimator and a Paradox Concerning Logging Policy
  Masahiro Kato · Kenichiro McAlinn · Shota Yasui
- 2020 Poster: Off-Policy Evaluation and Learning for External Validity under a Covariate Shift
  Masatoshi Uehara · Masahiro Kato · Shota Yasui
- 2020 Spotlight: Off-Policy Evaluation and Learning for External Validity under a Covariate Shift
  Masatoshi Uehara · Masahiro Kato · Shota Yasui
- 2013 Poster: Solving inverse problem of Markov chain with partial observations
  Tetsuro Morimura · Takayuki Osogami · Tsuyoshi Ide
- 2009 Poster: A Generalized Natural Actor-Critic Algorithm
  Tetsuro Morimura · Eiji Uchibe · Junichiro Yoshimoto · Kenji Doya