PYLON: A PyTorch Framework for Learning with Constraints
Kareem Ahmed · Tao Li · Nu Mai Thy Ton · Quan Guo · Kai-Wei Chang · Parisa Kordjamshidi · Vivek Srikumar · Guy Van den Broeck · Sameer Singh

Thu Dec 09 08:35 AM -- 08:50 AM (PST)
Event URL: https://pylon-lib.github.io/neurips21

Deep learning excels at learning task information from large amounts of data, but struggles to learn from declarative high-level knowledge that can be expressed more succinctly and directly. In this work, we introduce PYLON, a neural-symbolic training framework that builds on PyTorch to augment imperatively trained models with declaratively specified knowledge. PYLON lets users programmatically specify constraints as Python functions and compiles them into a differentiable loss, thus training predictive models that fit the data while satisfying the specified constraints. PYLON includes both exact and approximate compilers to efficiently compute the loss, employing fuzzy logic, sampling methods, and circuits, ensuring scalability even to complex models and constraints. Crucially, a guiding principle in designing PYLON is the ease with which any existing deep learning codebase can be extended to learn from constraints using only a few lines of code: a function that expresses the constraint and code to incorporate it as a loss. Our demo comprises models in NLP, computer vision, logical games, and knowledge graphs that can be interactively trained using constraints as supervision.
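The constraint-to-loss idea can be illustrated with a minimal, self-contained sketch. This is not PYLON's actual API (which operates on PyTorch tensors of model outputs); it is a hypothetical "exact compiler" in plain Python that enumerates all joint assignments of the predicted variables, sums the probability mass of assignments satisfying a user-written constraint function, and returns the negative log of that mass as a loss, assuming the variables are predicted independently.

```python
import itertools
import math

def constraint_loss(probs, constraint):
    """Exact constraint loss: -log P(constraint holds).

    probs: list of per-variable categorical distributions,
           e.g. [[P(x0=0), P(x0=1)], [P(x1=0), P(x1=1)]].
    constraint: a Python function mapping a joint assignment
                (tuple of labels) to True/False.
    Assumes the variables are independent given the model's inputs.
    """
    p_sat = 0.0
    # Enumerate every joint assignment of the output variables.
    for assignment in itertools.product(*[range(len(p)) for p in probs]):
        if constraint(assignment):
            p = 1.0
            for var, label in enumerate(assignment):
                p *= probs[var][label]
            p_sat += p
    # Low probability of satisfying the constraint => high loss.
    return -math.log(p_sat)

# Hypothetical constraint: two binary outputs must not both be 1.
def not_both(assignment):
    return not (assignment[0] == 1 and assignment[1] == 1)

# Example model predictions (softmax outputs for two binary variables).
probs = [[0.3, 0.7], [0.4, 0.6]]
loss = constraint_loss(probs, not_both)  # P(sat) = 1 - 0.7*0.6 = 0.58
```

In PYLON, the user writes only the constraint function (like `not_both` above); the framework compiles it into a differentiable loss over the model's output tensors and adds it to the training objective, with exact circuit-based compilation or approximate fuzzy-logic and sampling backends replacing the brute-force enumeration sketched here.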

Author Information

Kareem Ahmed (UCLA)
Tao Li (University of Utah)
Nu Mai Thy Ton (UC Irvine)
Quan Guo (Michigan State University)
Kai-Wei Chang (University of California, Los Angeles)
Parisa Kordjamshidi (Michigan State University)

Parisa Kordjamshidi is an assistant professor of Computer Science & Engineering at Michigan State University. Her research interests are machine learning, natural language processing, and declarative learning-based programming. She has worked on the extraction of formal semantics and structured representations from natural language. She received the NSF CAREER award in 2019. She leads a project supported by the Office of Naval Research to perform basic research and develop a declarative learning-based programming framework for integrating domain knowledge into statistical/neural learning. She is a member of the editorial board of the Journal of Artificial Intelligence Research (JAIR) and of the editorial board of Machine Learning and Artificial Intelligence, part of the journals Frontiers in Artificial Intelligence and Frontiers in Big Data. She has published papers, organized international workshops, and served as a (senior) program committee member or area chair for conferences such as IJCAI, AAAI, ACL, EMNLP, COLING, and ECAI, and was a member of the organizing committees of the EMNLP 2021, ECML-PKDD 2019, and NAACL 2018 conferences.

Vivek Srikumar (University of Utah)
Guy Van den Broeck (UCLA)

I am an Assistant Professor and Samueli Fellow at UCLA, in the Computer Science Department, where I direct the Statistical and Relational Artificial Intelligence (StarAI) lab. My research interests are in Machine Learning (Statistical Relational Learning, Tractable Learning), Knowledge Representation and Reasoning (Graphical Models, Lifted Probabilistic Inference, Knowledge Compilation), Applications of Probabilistic Reasoning and Learning (Probabilistic Programming, Probabilistic Databases), and Artificial Intelligence in general.

Sameer Singh (University of California, Irvine)

Sameer Singh is an Assistant Professor at UC Irvine working on robustness and interpretability of machine learning. Sameer has presented tutorials and invited workshop talks at EMNLP, NeurIPS, NAACL, WSDM, ICLR, ACL, and AAAI, and received paper awards at KDD 2016, ACL 2018, EMNLP 2019, AKBC 2020, and ACL 2020. Website: http://sameersingh.org/
