Lecture Announcements

20181126 Chuang Wang (王闯): Understanding stochastic gradient methods for high-dimensional inference: Dynamics, Guarantees, and Optimality
Posted: 2018-11-20


Time: 16:00-17:00, November 26, 2018

Venue: Conference Room 1016, Mingde Main Building

Title: Understanding stochastic gradient methods for high-dimensional inference: Dynamics, Guarantees, and Optimality


Abstract

Inference and learning from high-dimensional data are at the heart of modern signal processing and machine learning. Stochastic gradient-based algorithms have achieved surprisingly good performance on many convex and non-convex problems, as they are fast and memory-efficient. However, their empirical success depends heavily on careful choices of hyper-parameters, such as the learning rate. Why and when stochastic gradient methods work or fail is still not well understood, and strong theoretical guarantees remain limited.


  In this talk, I will present a rigorous framework for analyzing the exact dynamics of these algorithms in the high-dimensional limit, with applications ranging from nonlinear regression to more challenging tasks such as subspace tracking using partially observed data, independent component analysis, and the training of generative adversarial networks. Leveraging tools from statistical physics and high-dimensional probability theory, we show that one can use a few ordinary differential equations or partial differential equations to precisely characterize the limiting dynamics associated with these stochastic gradient methods.
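  As a concrete, much-simplified illustration of this ODE picture, the sketch below simulates online SGD on a toy Gaussian linear-regression model (chosen here purely for illustration; it is not one of the models analyzed in the talk): measurements y_k = a_k^T x_star / sqrt(d) + sigma * w_k with a_k ~ N(0, I_d), and SGD on the squared loss with rescaled step size tau. For this toy setup, the per-coordinate error Q(t) = ||x_k - x_star||^2 / d, with rescaled time t = k/d, satisfies the scalar ODE dQ/dt = (tau^2 - 2*tau)*Q + tau^2*sigma^2 as d grows; the script compares one simulated trajectory with the closed-form solution of that ODE. All symbols (tau, sigma, d, Q) are notation introduced only for this sketch.

    import numpy as np

    # Toy check of the ODE description of online SGD in high dimensions.
    # Model (an illustration chosen here, not one of the models in the talk):
    #   y_k = a_k^T x_star / sqrt(d) + sigma * w_k,  a_k ~ N(0, I_d), w_k ~ N(0, 1),
    # with online SGD on the squared loss 0.5 * (y_k - a_k^T x / sqrt(d))^2
    # and rescaled step size tau.  As d -> infinity with time t = k / d, the
    # per-coordinate error Q(t) = ||x_k - x_star||^2 / d follows the scalar ODE
    #   dQ/dt = (tau^2 - 2 tau) Q + tau^2 sigma^2,
    # whose solution is Q(t) = (Q(0) - Q_inf) exp((tau^2 - 2 tau) t) + Q_inf,
    # with Q_inf = tau sigma^2 / (2 - tau), valid for 0 < tau < 2.

    rng = np.random.default_rng(0)

    d, tau, sigma = 2000, 0.5, 0.5
    T = 10.0                          # horizon in rescaled time t = k / d
    n_steps = int(T * d)

    x_star = rng.standard_normal(d)   # ground truth with O(1) entries
    x = np.zeros(d)                   # so Q(0) = ||x_star||^2 / d is about 1

    q0 = np.sum((x - x_star) ** 2) / d
    q_inf = tau * sigma**2 / (2.0 - tau)

    for k in range(n_steps):
        if k % d == 0:                # report once per unit of rescaled time
            t = k / d
            q_sim = np.sum((x - x_star) ** 2) / d
            q_pred = (q0 - q_inf) * np.exp((tau**2 - 2.0 * tau) * t) + q_inf
            print(f"t = {t:4.1f}   simulated Q = {q_sim:.4f}   ODE prediction = {q_pred:.4f}")
        a = rng.standard_normal(d)
        y = a @ x_star / np.sqrt(d) + sigma * rng.standard_normal()
        x += (tau / np.sqrt(d)) * a * (y - a @ x / np.sqrt(d))

  For d in the thousands, the two columns should agree closely, since the random fluctuations around the ODE limit shrink as d grows; this concentration effect is what makes a low-dimensional description exact in the high-dimensional limit for this toy model.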

  Based on this analysis, we obtain a useful insight: in the high-dimensional limit, the originally coupled dynamics of these algorithms asymptotically “decouple”, with each coordinate independently solving a 1-D effective minimization problem via stochastic gradient descent. Exploiting this insight to design new algorithms that achieve optimal trade-offs between computational and statistical efficiency is an interesting line of research.
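  The decoupling statement can be illustrated on the same toy linear-regression model used above (again an illustration introduced here, not the talk's analysis). In that model, a single coordinate u of the error x_k - x_star approximately follows an independent scalar recursion, u_{k+1} = u_k - (tau/d)*g_k^2*u_k - (tau/sqrt(d))*g_k*z_k, with g_k ~ N(0, 1) and an effective noise z_k ~ N(0, Q(k/d) + sigma^2) whose level is set self-consistently by the ODE solution Q(t) from the previous sketch. The script below runs many independent copies of this 1-D process and checks that their average squared value tracks Q(t), which is the sense in which the coupled high-dimensional dynamics decouple into 1-D effective problems.

    import numpy as np

    # Decoupling sketch for the same toy model as above (illustration only).
    # In the d -> infinity limit, one coordinate u of the error x_k - x_star
    # approximately follows the independent scalar recursion
    #   u_{k+1} = u_k - (tau / d) g_k^2 u_k - (tau / sqrt(d)) g_k z_k,
    # with g_k ~ N(0, 1) and effective noise z_k ~ N(0, Q(k/d) + sigma^2),
    # where Q(t) solves the same ODE as before.  Averaging many independent
    # copies of this 1-D process should reproduce Q(t).

    rng = np.random.default_rng(1)

    d, tau, sigma = 2000, 0.5, 0.5    # d only sets the time discretization 1/d
    n_copies = 4000                   # independent 1-D "coordinates"
    T = 10.0
    n_steps = int(T * d)

    q0 = 1.0                          # matches an initial error with N(0, 1) entries
    q_inf = tau * sigma**2 / (2.0 - tau)

    def q_ode(t):
        # closed-form solution of dQ/dt = (tau^2 - 2 tau) Q + tau^2 sigma^2
        return (q0 - q_inf) * np.exp((tau**2 - 2.0 * tau) * t) + q_inf

    u = np.sqrt(q0) * rng.standard_normal(n_copies)   # u_0 ~ N(0, Q(0))

    for k in range(n_steps):
        t = k / d
        if k % (2 * d) == 0:
            print(f"t = {t:4.1f}   1-D effective E[u^2] = {np.mean(u**2):.4f}"
                  f"   ODE Q(t) = {q_ode(t):.4f}")
        g = rng.standard_normal(n_copies)
        z = np.sqrt(q_ode(t) + sigma**2) * rng.standard_normal(n_copies)
        u = u - (tau / d) * g**2 * u - (tau / np.sqrt(d)) * g * z

  The agreement is asymptotic (exact only as d goes to infinity), and this particular effective recursion is specific to the toy model, not to the general framework presented in the talk.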



Speaker Biography

  Chuang Wang received his Ph.D. degree in Theoretical Physics from the Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, China, in 2015. He then joined the Paulson School of Engineering and Applied Sciences at Harvard University, first as a Postdoctoral Fellow (Feb 2015 - Jan 2018) and more recently as a Research Associate (Feb 2018 - present) in the Signals, Information, and Networks Group. His research interests include theoretical aspects of high-dimensional signal and information processing, machine learning, imaging and image processing, probabilistic graphical models, and physics-inspired optimization algorithms. He won the Best Student Paper Award at the IEEE GlobalSIP Conference in 2014.