Learning Latent Feature Representations

Many of the empirical successes of machine learning can be characterized as "simple discrimination functions applied to complex representations". The question is: how do we automatically find these representations? In probabilistic modelling, we view this as a problem of finding latent variables, which provide a simpler and often lower-dimensional representation of our high-dimensional data. In the HIPS group, we are constantly developing new ways to construct these kinds of models and apply them in different domains.
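As a concrete illustration of learning a latent representation (not code from any of the papers below), here is a minimal sketch of a Bernoulli restricted Boltzmann machine trained with one step of contrastive divergence (CD-1) on toy binary data; all sizes, learning rates, and variable names are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data: 100 observations of 12 visible units.
data = (rng.random((100, 12)) < 0.5).astype(float)

n_vis, n_hid = 12, 4          # 4 hidden units: a lower-dimensional latent code
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v = np.zeros(n_vis)         # visible biases
b_h = np.zeros(n_hid)         # hidden biases
lr = 0.1

for epoch in range(50):
    # Positive phase: hidden probabilities and a sample given the data.
    p_h = sigmoid(data @ W + b_h)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    # Negative phase: one Gibbs step back to the visibles (CD-1).
    p_v = sigmoid(h @ W.T + b_v)
    p_h2 = sigmoid(p_v @ W + b_h)
    # Approximate gradient: data statistics minus model statistics.
    W += lr * (data.T @ p_h - p_v.T @ p_h2) / len(data)
    b_v += lr * (data - p_v).mean(axis=0)
    b_h += lr * (p_h - p_h2).mean(axis=0)

# The learned latent representation: expected hidden activations.
latent = sigmoid(data @ W + b_h)
```

Each 12-dimensional observation is summarized by a 4-dimensional vector of hidden-unit probabilities, which is the sense in which the latent variables provide a simpler representation of the data.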
Cardinality Restricted Boltzmann Machines. Swersky K, Tarlow D, Sutskever I, Salakhutdinov R, Zemel RS, Adams RP. Advances in Neural Information Processing Systems 25. 2012. { PDF }
Priors for Diversity in Generative Latent Variable Models. Zou JY, Adams RP. Advances in Neural Information Processing Systems 25. 2012. { PDF }
Training Restricted Boltzmann Machines on Word Observations. Dahl GE, Adams RP, Larochelle H. Proceedings of the 29th International Conference on Machine Learning. 2012. { arXiv:1202.5695 [cs.LG] | PDF }