I am a Senior Research Scientist in the Center for Computational Mathematics (CCM) at the Flatiron Institute.
I am part of a large and growing research effort in the area of machine learning, both within my own center (ML@CCM) and across all of Flatiron (ML@FI).
I work broadly across the areas of high dimensional data analysis, latent variable modeling, variational inference, and representation learning.
Within CCM, I am attempting to build a group with diverse backgrounds and interests. We interview seasonally for summer interns, three-year postdocs, and research scientists. This year we are also advertising a joint position as an associate research scientist in CCM and a tenure-track faculty member in the Computer Science Department at Cooper Union.
Before joining Flatiron, I was a tenured faculty member at UC San Diego and UPenn and a member of the technical staff at AT&T Labs.
I also served previously as Editor-in-Chief of JMLR and as Program Chair of NeurIPS. Before my work in machine learning, I earned a bachelor’s degree in Physics from Harvard and a doctorate in Physics from M.I.T.
Recent Projects
Variational inference
Given an intractable distribution p, the problem of variational inference (VI) is to find the best approximation q from some more tractable family.
Typically, q is found by minimizing the (reverse) Kullback-Leibler divergence, but in recent papers at ICML and NeurIPS, we have shown how to approximate p by minimizing certain score-based divergences. The first of these papers derives the Batch and Match algorithm for VI with multivariate Gaussian approximations, while the second describes an eigenvalue problem (EigenVI) for approximations based on orthogonal function expansions. In related work, this paper analyzes the inherent trade-offs that arise in VI when a factorized approximation q is used to model a target distribution p that does not factorize.
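For readers less familiar with this setup, here is a minimal sketch of the standard reverse-KL approach that these papers build on: it fits a diagonal Gaussian q to an unnormalized target by stochastic gradient descent on a Monte Carlo estimate of the ELBO. The two-dimensional target, sample size, and step size below are illustrative assumptions, and this is not the Batch and Match or EigenVI algorithm itself.

    import jax
    import jax.numpy as jnp

    def log_p(z):
        # Unnormalized target density: a correlated 2-D Gaussian (illustrative only).
        prec = jnp.array([[2.0, 0.9], [0.9, 1.0]])
        return -0.5 * z @ prec @ z

    def negative_elbo(params, key, num_samples=64):
        # Monte Carlo estimate of E_q[log q(z) - log p(z)], which equals KL(q||p)
        # up to the unknown normalizing constant of p.
        mu, log_sigma = params
        eps = jax.random.normal(key, (num_samples, mu.shape[0]))
        z = mu + jnp.exp(log_sigma) * eps                  # reparameterization trick
        log_q = jax.scipy.stats.norm.logpdf(z, mu, jnp.exp(log_sigma)).sum(axis=-1)
        return jnp.mean(log_q - jax.vmap(log_p)(z))

    params = (jnp.zeros(2), jnp.zeros(2))                  # mean and log std of q
    grad_fn = jax.jit(jax.grad(negative_elbo))
    key = jax.random.PRNGKey(0)
    for step in range(500):
        key, subkey = jax.random.split(key)
        grads = grad_fn(params, subkey)
        params = jax.tree_util.tree_map(lambda p, g: p - 0.05 * g, params, grads)

The papers above instead minimize score-based divergences, which compare the gradients of log q and log p rather than relying on this stochastic ELBO estimate.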
High dimensional data analysis
Sparse matrices are not generally low rank, and low-rank matrices are not generally sparse. But can one find more subtle connections between these different properties by looking beyond the canonical decompositions of linear algebra? This paper describes a nonlinear matrix decomposition that can be used to express a sparse nonnegative matrix in terms of a real-valued matrix of significantly lower rank. Arguably the most popular matrix decompositions in machine learning, such as principal component analysis and nonnegative matrix factorization, are those with a simple geometric interpretation. This paper gives such an interpretation for these nonlinear decompositions, one that arises naturally in the problem of manifold learning.
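Under the illustrative assumption that the nonlinearity in question is a ReLU, the snippet below constructs a small example of the phenomenon: the elementwise relu of a rank-5 matrix is nonnegative, has many entries that are exactly zero, and yet has rank far greater than 5. Recovering such a decomposition from the sparse matrix alone is the problem addressed in the paper; this example only shows that the low-rank representation exists.

    import jax
    import jax.numpy as jnp

    m, n, r = 200, 150, 5                      # matrix size and (low) inner rank
    kw, kh = jax.random.split(jax.random.PRNGKey(0))
    W = jax.random.normal(kw, (m, r))
    H = jax.random.normal(kh, (r, n))

    Theta = W @ H                              # real-valued matrix of rank r
    X = jax.nn.relu(Theta)                     # sparse, nonnegative, not low rank

    print("rank of Theta:", int(jnp.linalg.matrix_rank(Theta)))   # = r = 5
    print("rank of X:", int(jnp.linalg.matrix_rank(X)))           # much larger than r
    print("fraction of zeros in X:",
          float(jnp.mean((X == 0).astype(jnp.float32))))          # about one half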
Learning with symmetries: weight-balancing flows
Gradient descent is based on discretizing a continuous-time flow, typically one that descends in a regularized loss function. But what if, for all but the simplest types of regularizers, we have been discretizing the wrong flow? This paper makes two contributions to our understanding of deep learning in feedforward networks with homogeneous activation functions (e.g., ReLU) and rescaling symmetries. The first is to describe a simple procedure for balancing the weights in these networks without changing the end-to-end functions that they compute. The second is to derive a continuous-time dynamics that preserves this balance while descending in the network's loss function. These dynamics reduce to an ordinary gradient flow for l2-norm regularization, but not otherwise. Put another way, this analysis suggests a canonical pairing of alternative flows and regularizers.
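As a minimal sketch of the rescaling symmetry at the heart of this work, the snippet below balances the weights of a toy two-layer ReLU network f(x) = W2 relu(W1 x). The layer sizes and the per-unit rescaling rule are my own illustrative assumptions; the snippet shows only a simple balancing fix made possible by the symmetry, not the procedure or the continuous-time flow derived in the paper.

    import jax
    import jax.numpy as jnp

    def net(params, x):
        W1, W2 = params                        # two-layer ReLU network
        return W2 @ jax.nn.relu(W1 @ x)

    def balance(params):
        # For each hidden unit, scale its incoming weights by c and its outgoing
        # weights by 1/c, with c chosen so that the two norms become equal.
        W1, W2 = params
        in_norm = jnp.linalg.norm(W1, axis=1)  # per-unit norm of incoming weights
        out_norm = jnp.linalg.norm(W2, axis=0) # per-unit norm of outgoing weights
        c = jnp.sqrt(out_norm / in_norm)
        return (c[:, None] * W1, W2 / c[None, :])

    k1, k2, kx = jax.random.split(jax.random.PRNGKey(0), 3)
    params = (jax.random.normal(k1, (64, 10)), 0.1 * jax.random.normal(k2, (3, 64)))
    x = jax.random.normal(kx, (10,))
    balanced = balance(params)

    # The rescaling is a symmetry: the end-to-end function is unchanged (since
    # relu(c*z) = c*relu(z) for c > 0), but the per-unit norms are now equal.
    print(jnp.allclose(net(params, x), net(balanced, x), atol=1e-5))
    print(jnp.allclose(jnp.linalg.norm(balanced[0], axis=1),
                       jnp.linalg.norm(balanced[1], axis=0)))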
Recent papers
- D. Cai, C. Modi, C. C. Margossian, R. M. Gower, D. M. Blei, and L. K. Saul (2024). EigenVI: score-based variational inference with orthogonal function expansions. In Advances in Neural Information Processing Systems 37 (NeurIPS-2024). (Spotlight presentation)
- D. Cai, C. Modi, L. Pillaud-Vivien, C. C. Margossian, R. M. Gower, D. M. Blei, and L. K. Saul (2024). Batch and match: black-box variational inference with a score-based divergence. In Proceedings of the 41st International Conference on Machine Learning (ICML-2024), pages 5258-5297. (Spotlight presentation)
- C. Modi, C. C. Margossian, Y. Yao, R. M. Gower, D. M. Blei, and L. K. Saul (2023). Variational inference with Gaussian score matching. In Advances in Neural Information Processing Systems 36 (NeurIPS-2023), pages 29935-29950.
- L. K. Saul (2023). Weight-balancing fixes and flows for deep learning. Transactions on Machine Learning Research (09/2023).
- C. C. Margossian and L. K. Saul (2023). The shrinkage-delinkage tradeoff: an analysis of factorized Gaussian approximations for variational inference. In Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI-2023), PMLR 216:1358-1367. (Oral presentation)
- L. K. Saul (2022). A geometrical connection between sparse and low-rank matrices and its application to manifold learning. Transactions on Machine Learning Research (12/2022).
- L. K. Saul (2022). A nonlinear matrix decomposition for mining the zeros of sparse data. SIAM Journal on Mathematics of Data Science 4(2):431-463.