Sum-product networks (SPNs) have recently emerged as an attractive representation due to their dual view: as a special type of deep neural network with clear semantics, and as a special type of probabilistic graphical model for which inference is always tractable. These properties follow from two conditions (completeness and decomposability) that the structure of the network must satisfy. As a result, it is not easy to specify a valid SPN by hand, so structure learning techniques are typically used in practice. This paper describes a new online structure learning technique for feed-forward and recurrent SPNs. The algorithm is demonstrated on real-world datasets with continuous features, including sequence datasets of varying length, for which it is not clear what network architecture might be best.
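As a hedged illustration of the two validity conditions named in the abstract (not the paper's implementation), the sketch below builds a tiny SPN over two binary variables. Sum nodes check completeness (all children share the same scope), product nodes check decomposability (children have pairwise disjoint scopes), and under these conditions evaluating the network bottom-up yields a normalized joint distribution. All class and variable names here are hypothetical.

```python
# Minimal SPN sketch over two binary variables X0, X1 (illustrative only).

class Leaf:
    """Bernoulli leaf over a single variable."""
    def __init__(self, var, p_true):
        self.scope = {var}
        self.p = p_true
    def value(self, x):
        var = next(iter(self.scope))
        return self.p if x[var] else 1.0 - self.p

class Product:
    def __init__(self, children):
        # Decomposability: children's scopes must be pairwise disjoint.
        union = set().union(*(c.scope for c in children))
        assert sum(len(c.scope) for c in children) == len(union), \
            "product node violates decomposability"
        self.children = children
        self.scope = union
    def value(self, x):
        v = 1.0
        for c in self.children:
            v *= c.value(x)
        return v

class Sum:
    def __init__(self, weighted_children):
        # Completeness: all children must have the same scope.
        scopes = {frozenset(c.scope) for _, c in weighted_children}
        assert len(scopes) == 1, "sum node violates completeness"
        self.weighted = weighted_children
        self.scope = set(next(iter(scopes)))
    def value(self, x):
        return sum(w * c.value(x) for w, c in self.weighted)

# A mixture of two fully factorized distributions over {X0, X1}.
spn = Sum([
    (0.6, Product([Leaf(0, 0.9), Leaf(1, 0.2)])),
    (0.4, Product([Leaf(0, 0.1), Leaf(1, 0.7)])),
])

# Tractable inference: probabilities of all joint assignments sum to 1.
total = sum(spn.value({0: a, 1: b}) for a in (0, 1) for b in (0, 1))
print(round(total, 10))  # → 1.0
```

Because both conditions hold, the bottom-up pass computes exact marginals and likelihoods in time linear in the network size, which is the tractability property the abstract refers to.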
Thu Dec 06 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #162
Online Structure Learning for Feed-Forward and Recurrent Sum-Product Networks