Lawrence Saul

I am a Senior Research Scientist in the Center for Computational Mathematics (CCM) at the Flatiron Institute. I joined CCM in July 2022 as a group leader in machine learning. My current research focuses on high dimensional data analysis, latent variable modeling, and deep learning; other interests include variational inference, optimization, and kernel methods. Before joining Flatiron, I was a research scientist at AT&T Labs and a faculty member at UPenn and UC San Diego. I previously served as Editor-in-Chief of JMLR and as program chair for NeurIPS. I obtained my PhD in Physics from MIT, with a thesis on exact computational methods in the statistical mechanics of disordered systems.

Research

High dimensional data analysis

How can we discover low dimensional structure in high dimensional data? How does this question change (if at all) when the data is sparse? These are fascinating problems that lie at the intersection of statistics, geometry, and computation.

Manifold learning

If high dimensional data lives on (or near) a low dimensional manifold, then we can attempt to learn a similarity-preserving embedding. Recent work on this problem has revealed surprising connections to models of unsupervised learning in ReLU neural networks.
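
As a minimal illustration of this idea (not the recent work itself), the locally linear embedding algorithm of Roweis and Saul (2000), listed below, can be run in a few lines with scikit-learn; the dataset and parameter choices here are arbitrary:

    # Minimal sketch of manifold learning with locally linear embedding (LLE),
    # the method of Roweis & Saul (2000) listed below, via scikit-learn.
    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import LocallyLinearEmbedding

    # Sample points that lie near a 2D manifold embedded in 3D (toy data).
    X, _ = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

    # Learn a 2D embedding that preserves each point's local linear
    # relationship to its nearest neighbors.
    lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
    Y = lle.fit_transform(X)   # Y has shape (1500, 2)
    print(Y.shape)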

Deep learning

Deep neural networks provide state-of-the-art performance in many tasks, but the representations they learn are largely inscrutable. To learn more interpretable representations, we may need to re-examine the mathematical foundations of deep learning, particularly the types of nonlinearities and loss functions that are commonly used for training.

Latent variable modeling

Many types of structure in high dimensional data can be modeled via a smaller number of latent variables. An ongoing project is to discover new and richer latent variable models in which exact probabilistic inference remains tractable.
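
As a generic sketch of this setting (not the new models under development), a Gaussian mixture is a classic latent variable model in which the exact posterior over the latent component is available in closed form; the toy data and settings below are illustrative only:

    # A classic latent variable model with tractable exact inference:
    # a Gaussian mixture, whose latent variable is the mixture component.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Toy high dimensional data drawn from two clusters in 50 dimensions.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, size=(200, 50)),
                   rng.normal(3.0, 1.0, size=(200, 50))])

    # Fit by EM; the E-step posterior over the latent component is exact.
    gmm = GaussianMixture(n_components=2, covariance_type='diag', random_state=0).fit(X)
    posteriors = gmm.predict_proba(X)   # exact posterior responsibilities
    print(posteriors[:3].round(3))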

Recent papers

  • L. K. Saul (2022). A nonlinear matrix decomposition for mining the zeros of sparse data. SIAM Journal on Mathematics of Data Science 4(2):431-463. PDF
  • L. K. Saul (2021). An online passive-aggressive algorithm for difference-of-squares classification. In M. A. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan (eds.), Advances in Neural Information Processing Systems 34, pages 21426-21439. PDF
  • L. K. Saul (2021). An EM algorithm for capsule regression. Neural Computation 33(1):194-226. PDF
  • L. K. Saul (2020). A tractable latent variable model for nonlinear dimensionality reduction. Proceedings of the National Academy of Sciences USA 117(27):15403-15408. PDF

Older (representative) papers

  • D.-K. Kim, G. Voelker, and L. K. Saul (2013). A variational approximation for topic modeling of hierarchical corpora. Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 55-63. PDF
  • Y. Cho and L. K. Saul (2009). Kernel methods for deep learning. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta (eds.), Advances in Neural Information Processing Systems 22, pages 342-350. PDF
  • K. Q. Weinberger and L. K. Saul (2009). Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research 10:207-244. PDF
  • F. Sha, Y. Lin, L. K. Saul, and D. D. Lee (2007). Multiplicative updates for nonnegative quadratic programming. Neural Computation 19(8):2004-2031. PDF
  • K. Q. Weinberger and L. K. Saul (2006). Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision 70(1):77-90. PDF
  • S. T. Roweis and L. K. Saul (2000). Nonlinear dimensionality reduction by locally linear embedding. Science 290:2323-2326. PDF
  • M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul (1999). An introduction to variational methods for graphical models. Machine Learning 37:183-233. PDF

All papers

Contact