The last few years have witnessed a surge in methods for symbolic regression, from advances in traditional evolutionary approaches to novel deep learning-based systems. Individual works typically focus on advancing the state-of-the-art for one particular class of solution strategies, and there have been few attempts to investigate the benefits of hybridizing or integrating multiple strategies. In this work, we identify five classes of symbolic regression solution strategies---recursive problem simplification, neural-guided search, large-scale pre-training, genetic programming, and linear models---and propose a strategy to hybridize them into a single modular, unified symbolic regression framework. Based on empirical evaluation using SRBench, a new community tool for benchmarking symbolic regression methods, our unified framework achieves state-of-the-art performance in its ability to (1) symbolically recover analytical expressions, (2) fit datasets with high accuracy, and (3) balance accuracy-complexity trade-offs, across 252 ground-truth and black-box benchmark problems, in both noiseless settings and across various noise levels. Finally, we provide practical use case-based guidance for constructing hybrid symbolic regression algorithms, supported by extensive, combinatorial ablation studies.
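The modular hybridization described in the abstract can be pictured with a minimal sketch: several strategy modules each propose candidate expressions for a dataset, and the framework keeps the best-scoring candidate. This is a hypothetical illustration under assumed names (Candidate, LinearModelStrategy, RandomSearchStrategy, hybrid_regress), not the authors' implementation.

# Hypothetical sketch of a modular hybrid symbolic regression loop;
# all class and function names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np


@dataclass
class Candidate:
    expression: str                                   # human-readable model
    predict: Callable[[np.ndarray], np.ndarray]       # evaluates the model on X


class LinearModelStrategy:
    """Toy stand-in for the 'linear models' strategy class."""

    def propose(self, X: np.ndarray, y: np.ndarray) -> List[Candidate]:
        # Ordinary least squares on the raw features.
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        expr = " + ".join(f"{c:.3f}*x{i}" for i, c in enumerate(coef))
        return [Candidate(expr, lambda X_, c=coef: X_ @ c)]


class RandomSearchStrategy:
    """Toy stand-in for a search-based strategy (e.g. GP or neural-guided search)."""

    def propose(self, X: np.ndarray, y: np.ndarray) -> List[Candidate]:
        ops = [("sin(x0)", lambda X_: np.sin(X_[:, 0])),
               ("x0**2", lambda X_: X_[:, 0] ** 2)]
        return [Candidate(e, f) for e, f in ops]


def hybrid_regress(X: np.ndarray, y: np.ndarray, strategies) -> Candidate:
    """Collect candidates from every strategy module and keep the best fit."""
    candidates = [c for s in strategies for c in s.propose(X, y)]
    return min(candidates, key=lambda c: np.mean((c.predict(X) - y) ** 2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(100, 2))
    y = np.sin(X[:, 0])  # ground-truth expression to recover
    best = hybrid_regress(X, y, [LinearModelStrategy(), RandomSearchStrategy()])
    print("best candidate:", best.expression)

In this toy setup the search strategy recovers sin(x0) exactly; the point of the sketch is only that strategy modules share a common propose-and-score interface, so new strategy classes can be added or ablated independently.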
Author Information
Mikel Landajuela (Lawrence Livermore National Labs)
Mikel Landajuela is a Machine Learning Researcher at Lawrence Livermore National Laboratory (Computational Engineering Directorate) and holds a Ph.D. from Université Pierre et Marie Curie and Inria.
Chak Shing Lee (Lawrence Livermore National Labs)
Jiachen Yang (Georgia Institute of Technology)
Ruben Glatt (Lawrence Livermore National Laboratory)
With a background in Mechatronics and Mechanical Engineering, Ruben turned to Artificial Intelligence, where his main interest lies in Machine Learning (ML) research with a focus on Reinforcement Learning (RL), autonomous systems, and applications in energy efficiency. He received his Ph.D. in Computer Engineering in the area of ML from the University of Sao Paulo (USP), Brazil, holds a master's degree in Mechanical Engineering in the area of controlling mechanical systems from the Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Brazil, and a Diplom-Ingenieur degree in Mechatronics in the area of sensors and robotics from the Karlsruhe Institute of Technology (KIT), Germany. Ruben acquired years of professional experience before and during his studies while working in the technology and energy sectors, as well as in the organization of international ML conferences. After converting from a postdoctoral position at Lawrence Livermore National Laboratory, USA, he now works as a Machine Learning Researcher on a variety of RL projects to develop methods for collaborative autonomy in multi-agent systems, interpretable RL, and real-world applications. Ruben represented the postdocs at the Lab as Chair of the Lawrence Livermore Postdoc Association and as a member of the Institutional Postdoc Program Board. He also engages in community efforts and is currently the Vice-Chair of the IEEE Computer Society Oakland/East Bay/San Francisco chapter and a voting member of the IEEE Computer Society Artificial Intelligence Standards Committee (C/AISC). Ruben's long-term research interest lies in applying RL techniques to real-world challenges to accelerate and improve decision-making, autonomously or as a support tool for humans, preferably for applications in energy efficiency and smart mobility systems.
Claudio P Santiago (Lawrence Livermore National Laboratory)
Ignacio Aravena (Lawrence Livermore National Labs)
Terrell Mundhenk (Lawrence Livermore National Lab)
Garrett Mulcahy (University of Washington)
Brenden K Petersen (Lawrence Livermore National Laboratory)
More from the Same Authors
- 2021 Poster: Symbolic Regression via Deep Reinforcement Learning Enhanced Genetic Programming Seeding
  Terrell Mundhenk · Mikel Landajuela · Ruben Glatt · Claudio P Santiago · Daniel Faissol · Brenden K Petersen
- 2020 Poster: Learning to Incentivize Other Learning Agents
  Jiachen Yang · Ang Li · Mehrdad Farajtabar · Peter Sunehag · Edward Hughes · Hongyuan Zha
- 2019: Lunch + Poster Session
  Frederik Gerzer · Bill Yang Cai · Pieter-Jan Hoedt · Kelly Kochanski · Soo Kyung Kim · Yunsung Lee · Sunghyun Park · Sharon Zhou · Martin Gauch · Jonathan Wilson · Joyjit Chatterjee · Shamindra Shrotriya · Dimitri Papadimitriou · Christian Schön · Valentina Zantedeschi · Gabriella Baasch · Willem Waegeman · Gautier Cosne · Dara Farrell · Brendan Lucier · Letif Mones · Caleb Robinson · Tafara Chitsiga · Victor Kristof · Hari Prasanna Das · Yimeng Min · Alexandra Puchko · Alexandra Luccioni · Kyle Story · Jason Hickey · Yue Hu · Björn Lütjens · Zhecheng Wang · Renzhi Jing · Genevieve Flaspohler · Jingfan Wang · Saumya Sinha · Qinghu Tang · Armi Tiihonen · Ruben Glatt · Muge Komurcu · Jan Drgona · Juan Gomez-Romero · Ashish Kapoor · Dylan J Fitzpatrick · Alireza Rezvanifar · Adrian Albert · Olya (Olga) Irzak · Kara Lamb · Ankur Mahesh · Kiwan Maeng · Frederik Kratzert · Sorelle Friedler · Niccolo Dalmasso · Alex Robson · Lindiwe Malobola · Lucas Maystre · Yu-wen Lin · Surya Karthik Mukkavili · Brian Hutchinson · Alexandre Lacoste · Yanbing Wang · Zhengcheng Wang · Yinda Zhang · Victoria Preston · Jacob Pettit · Draguna Vrabie · Miguel Molina-Solana · Tonio Buonassisi · Andrew Annex · Tunai P Marques · Catalin Voss · Johannes Rausch · Max Evans