Advances in Neural Information Processing Systems 25 (2012)
The Restricted Boltzmann Machine (RBM) is a popular density model that is also good for extracting features. A main source of tractability in RBM models is that, given an input, the posterior distribution over hidden variables is factorizable and can be easily computed and sampled from. Sparsity and competition in the hidden representation are beneficial, and while an RBM with competition among its hidden units would acquire some of the attractive properties of sparse coding, such constraints are typically not added, as the resulting posterior over the hidden units seemingly becomes intractable. In this paper we show that a dynamic programming algorithm can be used to implement exact sparsity in the RBM's hidden units. We also show how to pass derivatives through the resulting posterior marginals, which makes it possible to fine-tune a pre-trained neural network with sparse hidden layers.
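To make the abstract's claim concrete, here is a minimal sketch of the kind of dynamic program that makes a cardinality-constrained posterior tractable: it computes exact marginals of independent Bernoulli units subject to an at-most-k-active constraint, by running a DP over the count of active units. The function name, the at-most-k variant, and the O(n²k) per-unit recomputation are illustrative assumptions, not the paper's implementation.

```python
import math


def cardinality_marginals(logits, k):
    """Exact posterior marginals of independent Bernoulli units under an
    at-most-k-active cardinality constraint (assumes k >= 1).

    Illustrative sketch only: the paper's algorithm is more efficient,
    but the underlying count-DP idea is the same.
    """
    # Unnormalized weight of each unit being on (off has weight 1).
    phis = [math.exp(a) for a in logits]

    def count_weights(ps, kmax):
        # Z[c] = total weight of configurations of `ps` with exactly c on.
        # Standard DP over units; iterate c downward so each unit is
        # used at most once (elementary-symmetric-polynomial recurrence).
        Z = [1.0] + [0.0] * kmax
        for phi in ps:
            for c in range(kmax, 0, -1):
                Z[c] += phi * Z[c - 1]
        return Z

    # Total weight of all configurations with at most k units on.
    Z_total = sum(count_weights(phis, k))

    marginals = []
    for j, phi in enumerate(phis):
        rest = phis[:j] + phis[j + 1:]
        # Weight of configurations with unit j on: phi_j times the weight
        # of the remaining units filling at most k-1 of the slots.
        on = phi * sum(count_weights(rest, k - 1))
        marginals.append(on / Z_total)
    return marginals
```

For three units with uniform logits and k = 1 there are four admissible configurations (all-off plus three singletons), so each unit's marginal is 0.25; the DP recovers this exactly. Because every quantity here is a smooth function of the logits, derivatives can be passed through these marginals, which is what enables the fine-tuning described above.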
Keywords: cardinality potentials, deep learning, latent features, restricted Boltzmann machines, sparsity
Submitted by rpa on Sat, 11/10/2012 - 14:34