

Poster in Workshop: Deep Reinforcement Learning

Embodiment perspective of reward definition for behavioural homeostasis

Naoto Yoshida · Yasuo Kuniyoshi


Abstract:

In this work, we propose a neural homeostat, a neural machine that stabilises the internal physiological state through interactions with the environment. Based on this framework, we demonstrate that behavioural homeostasis with low-level continuous motor control emerges from an embodied agent using only rewards computed from the agent's local information. By using the bodily state of the embodied agent as the reward source, the complexity of the reward definition is 'outsourced' to the coupled dynamics of the bodily state and the environment. As a result, the reward definition itself is simple, yet the optimised behaviour of the agent can be surprisingly complex. Our contributions are 1) an extension of homeostatic reinforcement learning to enable continuous motor control using deep reinforcement learning; 2) a comparison of homeostatic reward definitions from previous studies, where we found that homeostatic rewards using the difference of the drive function performed best; and 3) a demonstration of the emergence of adaptive behaviour from low-level motor control through direct optimisation of the homeostatic objective.
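To make the drive-difference reward named in contribution 2 concrete, the sketch below shows the general form used in homeostatic reinforcement learning (in the style of Keramati & Gutkin): a drive function measures the deviation of the internal state from a setpoint, and the reward is the reduction in drive produced by an action. The specific norm exponents, setpoint, and variable names here are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

def drive(h, h_star, n=4, m=3):
    """Drive function: the m/n-norm distance of the internal
    physiological state h from the setpoint h_star.
    Larger drive means a larger homeostatic deviation.
    (Exponents n, m are illustrative defaults.)"""
    return np.sum(np.abs(h_star - h) ** n) ** (m / n)

def homeostatic_reward(h_t, h_next, h_star):
    """Reward as the *difference* of the drive function:
    positive when the transition moves the internal state
    closer to the setpoint, negative when it moves away."""
    return drive(h_t, h_star) - drive(h_next, h_star)

# Toy usage with a two-dimensional internal state
# (e.g. energy level and temperature, both hypothetical).
h_star = np.array([0.7, 0.5])    # physiological setpoint
h_t    = np.array([0.4, 0.5])    # internal state before acting
h_next = np.array([0.55, 0.5])   # internal state after acting
print(homeostatic_reward(h_t, h_next, h_star))  # > 0: drive reduced
```

Because the reward depends only on the agent's own internal state, no task-specific shaping is needed: the environment and body dynamics determine which behaviours reduce drive, which is the sense in which the reward's complexity is 'outsourced'.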
