Jingfeng Wu

I am a first-year Ph.D. student in the Computer Science Department at Johns Hopkins University, advised by Prof. Vladimir Braverman.

Previously, I obtained my B.S. and M.S. from the School of Mathematical Sciences at Peking University. During my master's studies, I was fortunate to work with Prof. Jinwen Ma and Prof. Zhanxing Zhu.

Email  /  CV  /  Google Scholar  /  Github

News
  • One paper will be presented orally at the NeurIPS Workshop on Optimization for Machine Learning (OPT 2019).
  • I have started my Ph.D. at Johns Hopkins. Veritas vos liberabit! I am ready for new challenges and new collaborators.
Research

I am interested in mathematically motivated machine learning and deep learning. Currently, I am following cutting-edge research in: 1) theoretical machine learning, 2) stochastic algorithms, 3) generative models, 4) adversarial learning, 5) explainable computer vision.


Obtaining Regularization for Free via Iterate Averaging
Jingfeng Wu, Vladimir Braverman, Lin F. Yang
Workshop on Optimization for Machine Learning (OPT), 2019, oral

For general strongly convex and smooth losses, regularization can be obtained for free by properly averaging the iterates along the optimization path. The induced regularization is similar to weight decay.


The Multiplicative Noise in Stochastic Gradient Descent: Data-Dependent Regularization, Continuous and Discrete Approximation
Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, Zhanxing Zhu
Submitted
bibtex / arXiv

We reinterpret the noise in SGD from the perspective of random sampling, and obtain several novel results on understanding the regularization effects and the continuous and discrete approximations of SGD.


Tangent-Normal Adversarial Regularization for Semi-supervised Learning
Bing Yu*, Jingfeng Wu*, Jinwen Ma, Zhanxing Zhu
Conference on Computer Vision and Pattern Recognition (CVPR), 2019, oral
bibtex / arXiv / slides / poster / code

We present a novel manifold regularization method for semi-supervised learning, which is realized via adversarial training.


The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects
Zhanxing Zhu*, Jingfeng Wu*, Bing Yu, Lei Wu, Jinwen Ma
International Conference on Machine Learning (ICML), 2019
bibtex / arXiv / slides / poster

We study the anisotropic noise structure of stochastic gradient descent, and demonstrate how it helps the dynamics escape from sharp minima.

Experience
Notes

Template: this