We are a research group working on both core methods in machine learning and artificial intelligence and collaborative applications across science and engineering. Some of the topics we're interested in include:
  • automatic differentiation [1, 2]
  • Monte Carlo methods [3, 4]
  • Bayesian inference [5, 6]
  • ML-accelerated design and simulation [7, 8]
  • group symmetry in machine learning architectures [9]
  • materials science and chemistry [10, 11, 12, 13]
  • computational fabrication [14, 15]
  • generative modeling [16, 17]
  • reinforcement learning and control [18, 19]

Recent Publications

  1. Mirramezani, M., Oktay, D., & Adams, R. P. (2024). A rapid and automated computational approach to the design of multistable soft actuators. Computer Physics Communications.
  2. Li, M., Callaway, F., Thompson, W., Adams, R. P., & Griffiths, T. (2023). Learning to learn functions. Cognitive Science, 47(4).
  3. Adams, R. P., & Orbanz, P. (2023). Representing and learning functions invariant under crystallographic groups. arXiv preprint arXiv:2306.05261.
  4. Liu, S. (2023). Scalable and interpretable learning with probabilistic models for knowledge discovery [PhD thesis]. Princeton University.
  5. Cai, D. (2023). Probabilistic inference when the model is wrong [PhD thesis]. Princeton University.

Current Collaborators

  • Sigrid Adriaenssens
  • Katia Bertoldi
  • Abigail Doyle
  • Elif Ertekin
  • Tom Griffiths
  • Peter Orbanz
  • Peter Ramadge
  • Szymon Rusinkiewicz
  • Yee Whye Teh
  • Eric Toberer

Funding

  • National Science Foundation
  • Siemens
  • Templeton Foundation
  • Princeton Catalyst Initiative
  • Schmidt DataX
  • Ansys