Lifted inference algorithms for representations that combine first-order logic and probabilistic graphical models have been the focus of much recent research. All lifted algorithms developed to date are based on the same underlying idea: take a standard probabilistic inference algorithm (e.g., variable elimination, belief propagation, etc.) and improve its efficiency by exploiting repeated structure in the first-order model. In this paper, we approach the problem from the other direction: we use techniques from logic for probabilistic inference. In particular, we define a set of rules that look only at the logical representation to identify models for which exact, efficient inference is possible. We show that our rules yield several new tractable classes that cannot be solved efficiently by any of the existing techniques.
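As a rough illustration of the lifted-inference idea the abstract refers to (not the paper's actual rules), consider a toy Markov logic model with a single weighted formula Smokes(x) of weight w over a domain of n people; the formula name, weight, and domain size here are hypothetical and chosen only for the sketch. Because all n ground atoms are interchangeable, the partition function that a propositional algorithm would compute by enumerating 2^n assignments collapses to a closed form.

```python
import math
from itertools import product

# Toy model (assumed for illustration): one weighted formula Smokes(x) with
# weight w over a domain of n people. The ground partition function sums
# exp(w * #true atoms) over all 2^n truth assignments.

def ground_partition_function(w: float, n: int) -> float:
    """Naive propositional computation: enumerate all 2^n assignments."""
    z = 0.0
    for assignment in product([0, 1], repeat=n):
        z += math.exp(w * sum(assignment))
    return z

def lifted_partition_function(w: float, n: int) -> float:
    """Lifted computation: the n ground atoms are interchangeable, so
    Z = sum_k C(n, k) * exp(w * k) = (1 + e^w)^n."""
    return (1.0 + math.exp(w)) ** n

if __name__ == "__main__":
    w, n = 0.7, 10
    print(ground_partition_function(w, n))  # time exponential in n
    print(lifted_partition_function(w, n))  # closed form, independent of 2^n
```

Both functions return the same value; the point of the sketch is only that exploiting the symmetry of the first-order model turns an exponential enumeration into a closed-form expression, which is the kind of saving lifted inference aims for.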
Author Information
Abhay Jha (University of Washington)
Vibhav Gogate (UT Dallas)
Alexandra Meliou
Dan Suciu (University of Washington)
More from the Same Authors
- 2015 Poster: Bounding the Cost of Search-Based Lifted Inference
  David B Smith · Vibhav Gogate
- 2015 Poster: Fast Lifted MAP Inference via Partitioning
  Somdeb Sarkhel · Parag Singla · Vibhav Gogate
- 2015 Poster: Lifted Inference Rules With Constraints
  Happy Mittal · Anuj Mahajan · Vibhav Gogate · Parag Singla
- 2014 Poster: An Integer Polynomial Programming Based Framework for Lifted MAP Inference
  Somdeb Sarkhel · Deepak Venugopal · Parag Singla · Vibhav Gogate
- 2014 Poster: New Rules for Domain Independent Lifted MAP Inference
  Happy Mittal · Prasoon Goyal · Vibhav Gogate · Parag Singla
- 2014 Poster: Scaling-up Importance Sampling for Markov Logic Networks
  Deepak Venugopal · Vibhav Gogate
- 2012 Poster: On Lifting the Gibbs Sampling Algorithm
  Deepak Venugopal · Vibhav Gogate
- 2010 Poster: Learning Efficient Markov Networks
  Vibhav Gogate · William A Webb · Pedro Domingos