
Poster in Workshop: Agent Learning in Open-Endedness Workshop

PufferLib: Making Reinforcement Learning Libraries and Environments Play Nice

Joseph Suarez

Keywords: environments, complex, tools, open-ended


Abstract:

Common simplifying assumptions often cause standard reinforcement learning (RL) methods to fail on complex, open-ended environments. Creating a new wrapper for each environment and learning library can help alleviate these limitations, but building them is labor-intensive and error-prone. This practical tooling gap restricts the applicability of RL as a whole. To address this challenge, PufferLib transforms complex environments into a broadly compatible, vectorized format that eliminates the need for bespoke conversion layers and enables rigorous cross-environment testing. PufferLib does this without deviating from standard reinforcement learning APIs, significantly reducing the technical overhead. We release PufferLib's complete source code under the MIT license, a pip module, a containerized setup, comprehensive documentation, and example integrations. We also maintain a community Discord channel to facilitate support and discussion.
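To make the core idea concrete, the sketch below illustrates the kind of transformation the abstract describes: converting a complex, structured observation into a single flat vector that any standard RL library can consume. This is a hypothetical, minimal illustration of the general technique; the function and key names are assumptions for this example and are not PufferLib's actual API.

```python
import numpy as np

def flatten_observation(obs: dict, keys: list) -> np.ndarray:
    """Concatenate each component of a dict observation, in a fixed key
    order, into one flat float32 vector. (Illustrative helper, not part
    of PufferLib's real API.)"""
    return np.concatenate(
        [np.asarray(obs[k], dtype=np.float32).ravel() for k in keys]
    )

# Example: an environment that emits a dict of heterogeneous arrays.
obs = {
    "position": np.array([1.0, 2.0]),
    "inventory": np.zeros((2, 3)),
    "health": np.array(100.0),
}
keys = sorted(obs)  # fixed ordering so the layout is stable across steps
flat = flatten_observation(obs, keys)
print(flat.shape)  # (9,)
```

Once every environment emits observations in a uniform flat layout like this, a single vectorization and training pipeline can serve all of them, which is the compatibility gap the abstract describes closing.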
