Jiequn Han (韩劼群)

Flatiron Research Fellow
Center for Computational Mathematics
Flatiron Institute

162 5th Avenue
New York, NY 10010
Email: jhan (at) flatironinstitute (dot) org


About Me

I am a Flatiron Research Fellow at the Center for Computational Mathematics, Flatiron Institute. Previously, I was an Instructor of Mathematics in the Department of Mathematics, Princeton University. I obtained my Ph.D. in applied mathematics from the Program in Applied and Computational Mathematics (PACM), Princeton University, in June 2018, advised by Prof. Weinan E. Before that, I received my Bachelor's degree from the School of Mathematical Sciences, Peking University, in July 2013.

My research draws inspiration from various disciplines of science and is devoted to solving high-dimensional problems arising in scientific computing. My current research interests mainly focus on solving high-dimensional partial differential equations and machine learning-based multiscale modeling. I did a research internship at DeepMind during the summer of 2017, under the mentorship of Thore Graepel.

Here are my CV and some related links: Google Scholar profile, ResearchGate profile.


News


Preprints

  1. A class of dimensionality-free metrics for the convergence of empirical measures,
    Jiequn Han, Ruimeng Hu, Jihao Long,
    arXiv preprint, (2021). [arXiv]

  2. An L2 analysis of reinforcement learning in high dimensions with kernel and neural network approximation,
    Jihao Long, Jiequn Han, Weinan E,
    arXiv preprint, (2021). [arXiv]

  3. Frame-independent vector-cloud neural network for nonlocal constitutive modelling on arbitrary grids,
    Xu-Hui Zhou, Jiequn Han, Heng Xiao,
    arXiv preprint, (2021). [arXiv]

  4. Actor-critic method for high dimensional static Hamilton–Jacobi–Bellman partial differential equations based on neural networks,
    Mo Zhou, Jiequn Han, Jianfeng Lu,
    arXiv preprint, (2021). [arXiv]

  5. Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning,
    Weinan E, Jiequn Han, Arnulf Jentzen,
    arXiv preprint, (2020). [arXiv] [website]

  6. Convergence of deep fictitious play for stochastic differential games,
    Jiequn Han, Ruimeng Hu, Jihao Long,
    arXiv preprint, (2020). [arXiv]

  7. Perturbed gradient descent with occupation time,
    Xin Guo, Jiequn Han, Wenpin Tang,
    arXiv preprint, (2020). [arXiv]

  8. Universal approximation of symmetric and anti-symmetric functions,
    Jiequn Han, Yingzhou Li, Lin Lin, Jianfeng Lu, Jiefu Zhang, Linfeng Zhang,
    arXiv preprint, (2019). [arXiv]


Publications

  1. Recurrent neural networks for stochastic control problems with delay,
    Jiequn Han, Ruimeng Hu,
    Mathematics of Control, Signals, and Systems, in press. [arXiv] [code]

  2. Optimal policies for a pandemic: A stochastic game approach and a deep learning algorithm,
    Yao Xuan, Robert Balkin, Jiequn Han, Ruimeng Hu, Hector D Ceniceros,
    Mathematical and Scientific Machine Learning Conference (MSML) (2021), in press. [arXiv]

  3. Global convergence of policy gradient for linear-quadratic mean-field control/game in continuous time,
    Weichen Wang, Jiequn Han, Zhuoran Yang, Zhaoran Wang,
    International Conference on Machine Learning (ICML), (2021). [proceedings] [arXiv]

  4. Learning nonlocal constitutive models with neural networks,
    Xu-Hui Zhou, Jiequn Han, Heng Xiao,
    Computer Methods in Applied Mechanics and Engineering, 384, 113927 (2021). [journal] [arXiv] [code]

  5. Machine-learning-assisted modeling,
    Weinan E, Jiequn Han, Linfeng Zhang,
    Physics Today, 74, 7, 36 (2021). [journal] [an earlier, longer version on arXiv]

  6. On the curse of memory in recurrent neural networks: approximation and optimization analysis,
    Zhong Li, Jiequn Han, Weinan E, Qianxiao Li,
    International Conference on Learning Representations (ICLR), (2021). [OpenReview]

  7. Income and wealth distribution in macroeconomics: A continuous-time approach,
    Yves Achdou, Jiequn Han, Jean-Michel Lasry, Pierre-Louis Lions, Benjamin Moll,
    The Review of Economic Studies (2021). [journal] [NBER]

  8. Machine learning moment closures for accurate and efficient simulation of polydisperse evaporating sprays,
    James B. Scoggins, Jiequn Han, Marc Massot,
    AIAA Scitech 2021 Forum, 1786 (2021). [proceedings]

  9. Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach,
    Jiequn Han, Jianfeng Lu, Mo Zhou,
    Journal of Computational Physics, 423, 109792 (2020). [journal] [arXiv] [code]

  10. Deep fictitious play for finding Markovian Nash equilibrium in multi-agent games,
    Jiequn Han, Ruimeng Hu,
    Mathematical and Scientific Machine Learning Conference (MSML), PMLR 107:221-245 (2020). [proceedings] [arXiv]

  11. Convergence of the deep BSDE method for coupled FBSDEs,
    Jiequn Han, Jihao Long,
    Probability, Uncertainty and Quantitative Risk, 5(1), 1-33 (2020). [journal] [arXiv]

  12. Uniformly accurate machine learning-based hydrodynamic models for kinetic equations,
    Jiequn Han, Chao Ma, Zheng Ma, Weinan E,
    Proceedings of the National Academy of Sciences, 116(44), 21983-21991 (2019). [journal] [arXiv]

  13. Solving many-electron Schrödinger equation using deep neural networks,
    Jiequn Han, Linfeng Zhang, Weinan E,
    Journal of Computational Physics, 399, 108929 (2019). [journal] [arXiv]

  14. A mean-field optimal control formulation of deep learning,
    Weinan E, Jiequn Han, Qianxiao Li,
    Research in the Mathematical Sciences, 6:10 (2019). [journal] [arXiv]

  15. End-to-end symmetry preserving inter-atomic potential energy model for finite and extended systems,
    Linfeng Zhang, Jiequn Han, Han Wang, Wissam A. Saidi, Roberto Car, Weinan E,
    Conference on Neural Information Processing Systems (NeurIPS), (2018). [proceedings] [arXiv] [website] [code]

  16. Solving high-dimensional partial differential equations using deep learning,
    Jiequn Han, Arnulf Jentzen, Weinan E,
    Proceedings of the National Academy of Sciences, 115(34), 8505-8510 (2018). [journal] [arXiv] [code]

  17. DeePCG: constructing coarse-grained models via deep neural networks,
    Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, Weinan E,
    The Journal of Chemical Physics, 149, 034101 (2018). [journal] [arXiv] [website]

  18. DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics,
    Han Wang, Linfeng Zhang, Jiequn Han, Weinan E,
    Computer Physics Communications, 228, 178-184 (2018). [journal] [arXiv] [website] [code]

  19. Deep Potential Molecular Dynamics: a scalable model with the accuracy of quantum mechanics,
    Linfeng Zhang, Han Wang, Jiequn Han, Roberto Car, Weinan E,
    Physical Review Letters, 120, 143001 (2018). [journal] [arXiv] [website] [code]

  20. Deep Potential: a general representation of a many-body potential energy surface,
    Jiequn Han, Linfeng Zhang, Roberto Car, Weinan E,
    Communications in Computational Physics, 23, 629–639 (2018). [journal] [arXiv] [website]

  21. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations,
    Weinan E, Jiequn Han, Arnulf Jentzen,
    Communications in Mathematics and Statistics, 5, 349–380 (2017). [journal] [arXiv] [code]

  22. Deep learning approximation for stochastic control problems,
    Jiequn Han, Weinan E,
    Deep Reinforcement Learning Workshop, NIPS (2016). [arXiv]

  23. From microscopic theory to macroscopic theory: a systematic study on modeling for liquid crystals,
    Jiequn Han, Yi Luo, Zhifei Zhang, Pingwen Zhang,
    Archive for Rational Mechanics and Analysis, 215, 741–809 (2015). [journal] [arXiv]