Recent techniques that integrate solver layers into Deep Neural Networks (DNNs) have shown promise in bridging a long-standing gap between inductive learning and symbolic reasoning. In this paper we present SMTLayer, a set of techniques for integrating Satisfiability Modulo Theories (SMT) solvers into the forward and backward passes of a deep network layer. Using this approach, one can encode rich domain knowledge into the network in the form of mathematical formulas. In the forward pass, the solver uses symbols produced by prior layers, along with these formulas, to construct inferences; in the backward pass, the solver informs updates to the network, driving it towards representations that are compatible with the solver's theory. Notably, the solver need not be differentiable. We implement SMTLayer as a PyTorch module, and our empirical results show that it leads to models that 1) require fewer training samples than conventional models, 2) are robust to certain types of covariate shift, and 3) ultimately learn representations that are consistent with symbolic knowledge, and are thus naturally interpretable.
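The abstract's forward/backward contract can be illustrated with a schematic, dependency-free sketch. This is not the paper's implementation: the "theory" here is a toy propositional constraint (output equals the XOR of the input symbols) standing in for a real SMT formula, and the function names (`solver_forward`, `solver_backward`) are illustrative. The forward pass queries a non-differentiable solver for an output consistent with the formula; the backward pass finds the satisfying assignment nearest to the current symbols and emits the difference as a correction signal, so no gradient of the solver itself is needed.

```python
from itertools import product

def theory(inputs, output):
    # Toy stand-in for an SMT formula: the layer's output bit
    # must equal the XOR of the discretized input symbols.
    x = 0
    for b in inputs:
        x ^= b
    return output == x

def solver_forward(inputs):
    # Forward pass: given symbols produced by prior layers, ask the
    # (non-differentiable) solver for an output satisfying the formula.
    for out in (0, 1):
        if theory(inputs, out):
            return out
    return None  # formula unsatisfiable for these symbols

def solver_backward(inputs, target):
    # Backward pass: find the satisfying input assignment closest to the
    # current symbols (minimal Hamming distance) that yields the target
    # output; the elementwise difference acts as an update direction.
    best, best_d = None, None
    for cand in product((0, 1), repeat=len(inputs)):
        if theory(list(cand), target):
            d = sum(a != b for a, b in zip(cand, inputs))
            if best is None or d < best_d:
                best, best_d = list(cand), d
    return [b - a for a, b in zip(inputs, best)]

symbols = [1, 0, 1]               # discretized activations from prior layers
print(solver_forward(symbols))    # 1 ^ 0 ^ 1 = 0
print(solver_backward(symbols, 1))  # correction flipping one symbol
```

In the actual SMTLayer the solver reasons over richer theories than this propositional toy, but the shape of the interface is the same: the solver is treated as an oracle in both passes, which is why its differentiability is never required.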
Author Information
Zifan Wang (Carnegie Mellon University)
Saranya Vijayakumar (Carnegie Mellon University)
Kaiji Lu (Carnegie Mellon University)
Vijay Ganesh (University of Waterloo)
Dr. Vijay Ganesh is an associate professor at the University of Waterloo and the Co-Director of the Waterloo Artificial Intelligence Institute. Prior to joining Waterloo in 2012, he was a research scientist at MIT (2007-2012); he completed his PhD in computer science at Stanford in 2007. Vijay's primary area of research is the theory and practice of SAT/SMT solvers aimed at AI, software engineering, security, mathematics, and physics. In this context he has led the development of many SAT/SMT solvers, most notably STP, Z3str, MapleSAT, and MathCheck. He has also proved several decidability and complexity results in the context of first-order theories. He has won over 25 awards, honors, and medals to date for his research, including an ACM Impact Paper Award at ISSTA 2019, an ACM Test of Time Award at CCS 2016, and a Ten-Year Most Influential Paper citation at DATE 2008. He is the Editor-in-Chief of the Springer book series "Progress in Computer Science and Applied Logic" (PCSAL) and has co-chaired many conferences, workshops, and seminars, including a Simons Institute semester at Berkeley on Boolean Satisfiability in 2021.
Somesh Jha (University of Wisconsin, Madison)
Matt Fredrikson (Carnegie Mellon University)
More from the Same Authors
- 2022: Best of Both Worlds: Towards Adversarial Robustness with Transduction and Rejection
  Nils Palumbo · Yang Guo · Xi Wu · Jiefeng Chen · Yingyu Liang · Somesh Jha
- 2023 Poster: Robust and Actively Secure Serverless Collaborative Learning
  Nicholas Franzese · Adam Dziedzic · Christopher A. Choquette-Choo · Mark R Thomas · Muhammad Ahmad Kaleem · Stephan Rabanser · Congyu Fang · Somesh Jha · Nicolas Papernot · Xiao Wang
- 2023 Poster: Unlocking Deterministic Robustness Certification on ImageNet
  Kai Hu · Andy Zou · Zifan Wang · Klas Leino · Matt Fredrikson
- 2022 Spotlight: Lightning Talks 2A-2
  Harikrishnan N B · Jianhao Ding · Juha Harviainen · Yizhen Wang · Lue Tao · Oren Mangoubi · Tong Bu · Nisheeth K. Vishnoi · Mohannad Alhanahnah · Mikko Koivisto · Aditi Kathpalia · Lei Feng · Nithin Nagaraj · Hongxin Wei · Xiaozhu Meng · Petteri Kaski · Zhaofei Yu · Tiejun Huang · Ke Wang · Jinfeng Yi · Jian Liu · Sheng-Jun Huang · Mihai Christodorescu · Songcan Chen · Somesh Jha
- 2022 Spotlight: Robust Learning against Relational Adversaries
  Yizhen Wang · Mohannad Alhanahnah · Xiaozhu Meng · Ke Wang · Mihai Christodorescu · Somesh Jha
- 2022 Poster: Overparameterization from Computational Constraints
  Sanjam Garg · Somesh Jha · Saeed Mahloujifar · Mohammad Mahmoody · Mingyuan Wang
- 2022 Poster: Robust Learning against Relational Adversaries
  Yizhen Wang · Mohannad Alhanahnah · Xiaozhu Meng · Ke Wang · Mihai Christodorescu · Somesh Jha
- 2022 Poster: A Quantitative Geometric Approach to Neural-Network Smoothness
  Zi Wang · Gautam Prakriya · Somesh Jha
- 2021 Poster: Influence Patterns for Explaining Information Flow in BERT
  Kaiji Lu · Zifan Wang · Piotr Mardziel · Anupam Datta
- 2021 Poster: Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles
  Jiefeng Chen · Frederick Liu · Besim Avci · Xi Wu · Yingyu Liang · Somesh Jha
- 2021: Exploring Conceptual Soundness with TruLens
  Anupam Datta · Matt Fredrikson · Klas Leino · Kaiji Lu · Shayak Sen · Ricardo C Shih · Zifan Wang
- 2021 Poster: Relaxing Local Robustness
  Klas Leino · Matt Fredrikson
- 2021 Poster: A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks
  Samuel Deng · Sanjam Garg · Somesh Jha · Saeed Mahloujifar · Mohammad Mahmoody · Abhradeep Guha Thakurta
- 2020 Poster: Smoothed Geometry for Robust Attribution
  Zifan Wang · Haofan Wang · Shakul Ramkumar · Piotr Mardziel · Matt Fredrikson · Anupam Datta
- 2019 Poster: Attribution-Based Confidence Metric For Deep Neural Networks
  Susmit Jha · Sunny Raj · Steven Fernandes · Sumit K Jha · Somesh Jha · Brian Jalaian · Gunjan Verma · Ananthram Swami
- 2019 Poster: Robust Attribution Regularization
  Jiefeng Chen · Xi Wu · Vaibhav Rastogi · Yingyu Liang · Somesh Jha
- 2018: Semantic Adversarial Examples by Somesh Jha
  Somesh Jha
- 2018 Workshop: Workshop on Security in Machine Learning
  Nicolas Papernot · Jacob Steinhardt · Matt Fredrikson · Kamalika Chaudhuri · Florian Tramer
- 2018 Poster: Hunting for Discriminatory Proxies in Linear Regression Models
  Samuel Yeom · Anupam Datta · Matt Fredrikson