In the sequential decision-making setting, an agent aims to achieve systematic generalization over a large, possibly infinite, set of environments. Such environments are modeled as discrete Markov decision processes whose states and actions are represented through feature vectors. The underlying structure of the environments allows the transition dynamics to be factored into two components: one that is environment-specific and one that is shared across environments. As an illustrative example, consider a set of environments that share the laws of motion. In this setting, the agent can collect a finite number of reward-free interactions from a subset of these environments. The agent must then be able to approximately solve any planning task defined over any environment in the original set, relying solely on those interactions. Can we design a provably efficient algorithm that achieves this ambitious goal of systematic generalization? In this paper, we give a partially positive answer to this question. First, we provide the first tractable formulation of systematic generalization by employing a causal viewpoint. Then, under specific structural assumptions, we provide a simple learning algorithm that guarantees any desired planning error up to an unavoidable sub-optimality term, while enjoying a polynomial sample complexity.
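To make the setting concrete, here is a minimal sketch (not the paper's algorithm): it builds a toy family of tabular MDPs whose transition kernels combine a shared component with an environment-specific one, collects reward-free interactions from a subset of the environments, and then plans, via value iteration on the estimated model, for a reward defined afterwards. All names (`transition`, `collect_reward_free`, `plan`, `shared_logits`, `env_bias`) and the specific factorization are illustrative assumptions, not constructs from the paper.

```python
# Toy sketch of the setting only: a family of tabular MDPs whose dynamics
# combine a shared factor with an environment-specific factor, reward-free
# data collection from a subset of environments, and planning on the
# estimated model. Names and factorization are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
S, A, E = 5, 3, 4          # states, actions, environments in the family

# Shared "laws of motion": next-state logits, identical across environments.
shared_logits = rng.normal(size=(S, A, S))
# Environment-specific factor: a per-environment bias over next states.
env_bias = rng.normal(scale=0.5, size=(E, S))

def transition(e):
    """Transition kernel of environment e, combining shared and specific factors."""
    logits = shared_logits + env_bias[e][None, None, :]
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)         # shape (S, A, S)

def collect_reward_free(envs, n_steps=2000):
    """Reward-free interactions from a subset of environments (uniform policy)."""
    counts = np.zeros((S, A, S))
    for e in envs:
        P = transition(e)
        s = 0
        for _ in range(n_steps):
            a = rng.integers(A)
            s_next = rng.choice(S, p=P[s, a])
            counts[s, a, s_next] += 1
            s = s_next
    return counts

def plan(P_hat, reward, gamma=0.9, iters=200):
    """Value iteration on an estimated model for an arbitrary planning task."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = reward[:, None] + gamma * P_hat @ V      # shape (S, A)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Reward-free phase on a subset of environments, then planning for a reward
# that is only revealed afterwards.
counts = collect_reward_free(envs=[0, 1, 2])
P_hat = (counts + 1e-3) / (counts + 1e-3).sum(axis=-1, keepdims=True)
reward = rng.uniform(size=S)
print("greedy policy for the downstream task:", plan(P_hat, reward))
```

Note that this toy simply pools the reward-free counts into one estimated kernel; the paper's guarantees instead come from exploiting the shared structure across environments under its causal and structural assumptions.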
Author Information
Mirco Mutti (Politecnico di Milano, Università di Bologna)
Riccardo De Santi (ETH Zurich)
Emanuele Rossi (Imperial College London)
Juan Calderon
Michael Bronstein (USI)
Marcello Restelli (Politecnico di Milano)
More from the Same Authors
- 2021 Spotlight: Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning »
  Alberto Maria Metelli · Alessio Russo · Marcello Restelli
- 2021: Policy Optimization via Optimal Policy Evaluation »
  Alberto Maria Metelli · Samuele Meta · Marcello Restelli
- 2022: Multi-Armed Bandit Problem with Temporally-Partitioned Rewards »
  Giulia Romano · Andrea Agostini · Francesco Trovò · Nicola Gatti · Marcello Restelli
- 2022: Equivariant 3D-Conditional Diffusion Models for Molecular Linker Design »
  Ilia Igashov · Hannes Stärk · Clément Vignac · Victor Garcia Satorras · Pascal Frossard · Max Welling · Michael Bronstein · Bruno Correia
- 2022: On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs with Missing Node Features »
  Emanuele Rossi · Henry Kenlay · Maria Gorinova · Benjamin Chamberlain · Xiaowen Dong · Michael Bronstein
- 2022: Hyperbolic Deep Reinforcement Learning »
  Edoardo Cetin · Benjamin Chamberlain · Michael Bronstein · Jonathan J. Hunt
- 2023 Poster: Distributional Policy Evaluation: a Maximum Entropy approach to Representation Learning »
  Riccardo Zamboni · Alberto Maria Metelli · Marcello Restelli
- 2023 Poster: Truncating Trajectories in Monte Carlo Policy Evaluation: an Adaptive Approach »
  Riccardo Poiani · Nicole Nobili · Alberto Maria Metelli · Marcello Restelli
- 2023 Poster: Curvature Filtrations for Graph Generative Model Evaluation »
  Joshua Southern · Jeremy Wayland · Michael Bronstein · Bastian Rieck
- 2023 Poster: Temporal Graph Benchmark for Machine Learning on Temporal Graphs »
  Shenyang Huang · Farimah Poursafaei · Jacob Danovitch · Matthias Fey · Weihua Hu · Emanuele Rossi · Jure Leskovec · Michael Bronstein · Guillaume Rabusseau · Reihaneh Rabbany
- 2023 Workshop: Temporal Graph Learning Workshop @ NeurIPS 2023 »
  Farimah Poursafaei · Shenyang Huang · Kellin Pelrine · Julia Gastinger · Emanuele Rossi · Michael Bronstein · Reihaneh Rabbany
- 2022: Panel »
  Vikas Garg · Pan Li · Srijan Kumar · Emanuele Rossi · Shenyang Huang
- 2022 Workshop: Temporal Graph Learning Workshop »
  Reihaneh Rabbany · Jian Tang · Michael Bronstein · Shenyang Huang · Meng Qu · Kellin Pelrine · Jianan Zhao · Farimah Poursafaei · Aarash Feizi
- 2022 Poster: Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs »
  Cristian Bodnar · Francesco Di Giovanni · Benjamin Chamberlain · Pietro Lió · Michael Bronstein
- 2022 Poster: Understanding and Extending Subgraph GNNs by Rethinking Their Symmetries »
  Fabrizio Frasca · Beatrice Bevilacqua · Michael Bronstein · Haggai Maron
- 2022 Poster: Multi-Fidelity Best-Arm Identification »
  Riccardo Poiani · Alberto Maria Metelli · Marcello Restelli
- 2022 Poster: Challenging Common Assumptions in Convex Reinforcement Learning »
  Mirco Mutti · Riccardo De Santi · Piersilvio De Bartolomeis · Marcello Restelli
- 2022 Poster: Off-Policy Evaluation with Deficient Support Using Side Information »
  Nicolò Felicioni · Maurizio Ferrari Dacrema · Marcello Restelli · Paolo Cremonesi
- 2021: GRAND: Graph Neural Diffusion »
  Benjamin Chamberlain · James Rowbottom · Maria Gorinova · Stefan Webb · Emanuele Rossi · Michael Bronstein
- 2021 Poster: Learning in Non-Cooperative Configurable Markov Decision Processes »
  Giorgia Ramponi · Alberto Maria Metelli · Alessandro Concetti · Marcello Restelli
- 2021 Poster: Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection »
  Matteo Papini · Andrea Tirinzoni · Aldo Pacchiano · Marcello Restelli · Alessandro Lazaric · Matteo Pirotta
- 2021 Poster: Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning »
  Alberto Maria Metelli · Alessio Russo · Marcello Restelli
- 2020: Invited Talk 1: Geometric deep learning for 3D human body synthesis »
  Michael Bronstein
- 2020 Poster: An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits »
  Andrea Tirinzoni · Matteo Pirotta · Marcello Restelli · Alessandro Lazaric
- 2020 Poster: Inverse Reinforcement Learning from a Gradient-based Learner »
  Giorgia Ramponi · Gianluca Drappo · Marcello Restelli
- 2020 Session: Orals & Spotlights Track 31: Reinforcement Learning »
  Dotan Di Castro · Marcello Restelli
- 2019 Workshop: Graph Representation Learning »
  Will Hamilton · Rianne van den Berg · Michael Bronstein · Stefanie Jegelka · Thomas Kipf · Jure Leskovec · Renjie Liao · Yizhou Sun · Petar Veličković
- 2019 Poster: Propagating Uncertainty in Reinforcement Learning via Wasserstein Barycenters »
  Alberto Maria Metelli · Amarildo Likmeta · Marcello Restelli
- 2018 Poster: Policy Optimization via Importance Sampling »
  Alberto Maria Metelli · Matteo Papini · Francesco Faccio · Marcello Restelli
- 2018 Poster: Transfer of Value Functions via Variational Methods »
  Andrea Tirinzoni · Rafael Rodriguez Sanchez · Marcello Restelli
- 2018 Oral: Policy Optimization via Importance Sampling »
  Alberto Maria Metelli · Matteo Papini · Francesco Faccio · Marcello Restelli
- 2017 Poster: Compatible Reward Inverse Reinforcement Learning »
  Alberto Maria Metelli · Matteo Pirotta · Marcello Restelli
- 2017 Poster: Adaptive Batch Size for Safe Policy Gradients »
  Matteo Papini · Matteo Pirotta · Marcello Restelli
- 2014 Poster: Sparse Multi-Task Reinforcement Learning »
  Daniele Calandriello · Alessandro Lazaric · Marcello Restelli
- 2013 Poster: Adaptive Step-Size for Policy Gradient Methods »
  Matteo Pirotta · Marcello Restelli · Luca Bascetta
- 2011 Poster: Transfer from Multiple MDPs »
  Alessandro Lazaric · Marcello Restelli
- 2007 Spotlight: Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods »
  Alessandro Lazaric · Marcello Restelli · Andrea Bonarini
- 2007 Poster: Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods »
  Alessandro Lazaric · Marcello Restelli · Andrea Bonarini