I am a Senior Research Scientist in the
Center for Computational Mathematics (CCM)
at the
Flatiron Institute.
I am also part of a larger research effort in
machine learning.
I work broadly across the areas of high-dimensional data analysis, latent variable modeling, deep learning, variational inference, optimization, and kernel methods,
and within CCM, I am attempting to build a group with diverse backgrounds and interests. Before joining Flatiron, I was a research scientist at
AT&T Labs
and a faculty member at UPenn and
UC San Diego.
I previously served as Editor-in-Chief of the Journal of Machine Learning Research (JMLR) and as program chair for
NeurIPS.
I obtained my PhD in Physics from MIT, with a thesis on exact computational methods in
the statistical mechanics of disordered systems.
Representative Projects
Manifold learning
The goal of manifold learning is to discover a similarity-preserving mapping of high-dimensional inputs into a lower-dimensional space. Two different types of matrices typically arise in these problems: sparse matrices, whose nonzero elements encode which inputs are to be regarded as similar, and low-rank matrices, whose nonzero eigenvalues indicate the dimensionality required for a faithful embedding. In recent work we have shown that these two matrices, one sparse (S) and one low-rank (L), can be mathematically connected by an elementwise rectified linearity: namely, S = max(0, L). This work also suggests how manifold learning might serve as a layerwise prescription for unsupervised learning in ReLU neural networks.
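As a toy numerical illustration (a hypothetical sketch, not code from the paper), the easy direction of this connection can be checked directly: rectifying a random low-rank Gram matrix elementwise yields a matrix with many exact zeros, i.e., a sparse S from a low-rank L.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Gram matrix L = U U^T of rank r << n; its off-diagonal
# entries u_i . u_j take both signs, so rectification creates zeros.
n, r = 8, 2
U = rng.standard_normal((n, r))
L = U @ U.T

# Elementwise rectified linearity: S = max(0, L).
S = np.maximum(0.0, L)

print("rank of L:", np.linalg.matrix_rank(L))      # equal to r
print("fraction of zeros in S:", np.mean(S == 0.0)) # strictly positive
```

The harder direction, recovering a low-rank L from a given sparse S, is the subject of the papers below.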
Sparse representation learning
We are investigating a generalized family of feedforward neural networks whose input-output mappings are positive homogeneous functions of degree one. This form of inductive bias has two advantages for deep learning. First, it leads to intensity-equivariant representations of sensory inputs (e.g., images, sounds) in which the network activations at all layers scale linearly with the intensity of these stimuli. Second, it yields more interpretable classifiers in which successively sparser representations of negatively labeled patterns emerge from each layer of processing.
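A minimal sketch of the first property, with hypothetical layer sizes and random weights: a bias-free feedforward ReLU network is positive homogeneous of degree one, so scaling the input intensity by any c > 0 scales the output (and every intermediate activation) by the same factor.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu_net(x, weights):
    """Bias-free feedforward ReLU network: hidden layers x -> max(0, W x),
    followed by a linear readout. With no biases, f(c x) = c f(x) for c > 0."""
    for W in weights[:-1]:
        x = np.maximum(0.0, W @ x)
    return weights[-1] @ x

# Hypothetical three-layer network with random weights.
weights = [rng.standard_normal((16, 10)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((4, 16))]

x = rng.standard_normal(10)
c = 3.7  # any positive rescaling of the input intensity

# Intensity equivariance: the output scales linearly with the input.
assert np.allclose(relu_net(c * x, weights), c * relu_net(x, weights))
```

Note that adding bias terms breaks this property, which is why the family is restricted to positive homogeneous mappings.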
Recent papers

L. K. Saul (2022).
A geometrical connection between sparse and low-rank matrices and its application to manifold learning.
Transactions on Machine Learning Research.
PDF

L. K. Saul (2022).
A nonlinear matrix decomposition for mining the zeros of sparse data.
SIAM Journal on Mathematics of Data Science 4(2):431–463.
PDF

L. K. Saul (2021). An online passive-aggressive algorithm for difference-of-squares classification.
In M. A. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan (eds.),
Advances in Neural Information Processing Systems 34,
pages 21426–21439.
PDF

L. K. Saul (2021). An EM algorithm for capsule regression. Neural Computation 33(1):194–226.
PDF

L. K. Saul (2020). A tractable latent variable model for nonlinear dimensionality reduction.
Proceedings of the National Academy of Sciences USA 117(27):15403–15408.
PDF
Older (representative) papers

D. K. Kim, G. Voelker, and L. K. Saul (2013).
A variational approximation for topic modeling of hierarchical corpora.
Proceedings of the 30th International Conference on Machine Learning (ICML-13),
pages 55–63.
PDF

Y. Cho and L. K. Saul (2009).
Kernel methods for deep learning.
In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta (eds.),
Advances in Neural Information Processing Systems 22, pages 342–350.
PDF

K. Q. Weinberger and L. K. Saul (2009).
Distance metric learning for large margin nearest neighbor classification.
Journal of Machine Learning Research 10:207–244.
PDF

F. Sha, Y. Lin, L. K. Saul, and D. D. Lee (2007).
Multiplicative updates for nonnegative quadratic programming.
Neural Computation 19(8):2004–2031.
PDF

K. Q. Weinberger and L. K. Saul (2006).
Unsupervised learning of image manifolds by semidefinite programming.
International Journal of Computer Vision 70(1):77–90.
PDF

S. T. Roweis and L. K. Saul (2000).
Nonlinear dimensionality reduction by locally linear embedding.
Science 290:2323–2326.
PDF

M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul (1999).
An introduction to variational methods for graphical models.
Machine Learning 37:183–233.
PDF
All papers