Jingfeng Wu / 吴京风

I am a Ph.D. student (2019 - now) in the Computer Science Department at Johns Hopkins University, supervised by Prof. Vladimir Braverman. Previously, I obtained a B.S. in Mathematics (2012 - 2016) and an M.S. in Applied Mathematics (2016 - 2019) from the School of Mathematical Sciences at Peking University.

Email  /  CV  /  Google Scholar  /  Github  /  Twitter

News
  • [Sep. 2021] Two papers are accepted to NeurIPS 2021.
  • [Sep. 2021] Talk on problem-dependent bound of constant-stepsize SGD at JHU CS Theory Seminar.
  • [May 2021] Awarded MINDS Summer Data Science Fellowship. Many thanks to the committee.
  • [May 2021] One paper is accepted to COLT 2021.
  • [Mar. 2021] Talk on the implicit bias of SGD at UCLA.
  • [Feb. 2021] In a relationship with Yuan, happy Valentine's day~
  • [Jan. 2021] One paper is accepted to ICLR 2021.
  • [Nov. 2020] Talk on the implicit bias of SGD at JHU CS Theory Seminar.
  • [Oct. 2020] One paper will be presented at the Workshop on Optimization for Machine Learning (OPT).
  • [May 2020] Two papers are accepted to ICML 2020.
  • [Mar. 2020] Stay home, stay safe.
  • [Nov. 2019] One paper will be orally presented at the Workshop on Optimization for Machine Learning (OPT).
  • [Sep. 2019] Started working at Hopkins. Veritas vos liberabit!
  • [Jun. 2019] Graduated from Peking University.
  • [Apr. 2019] One paper is accepted to ICML 2019.
  • [Mar. 2019] One paper is accepted as an oral presentation at CVPR 2019.
  • [Dec. 2018] Looking for a Ph.D. position in machine learning / statistical learning starting fall 2019.
Research

I am interested in the mathematical aspects of machine learning algorithms. On the theory side, I work on understanding algorithmic bias and algorithmic complexity. On the application side, I develop algorithms with provable guarantees. Currently, I work mostly with SGD and MDPs.

Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression
Jingfeng Wu*, Difan Zou*, Vladimir Braverman, Quanquan Gu, Sham M. Kakade
arXiv, 2021
bibtex / arXiv

We prove problem-dependent excess risk bounds for SGD with decaying stepsize on overparameterized linear regression problems.

The Benefits of Implicit Regularization from SGD in Least Squares Problems
Difan Zou*, Jingfeng Wu*, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade
Conference on Neural Information Processing Systems (NeurIPS), 2021
bibtex / arXiv

We show that on a broad class of interesting least squares instances, SGD is always nearly as good as ridge regression, while ridge regression can be much worse than SGD.
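
For intuition only, here is a minimal Python sketch of this kind of comparison. It is not the construction or analysis from the paper: the spectrum, stepsize, tail-averaging window, and ridge penalty below are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 500                          # fewer samples than dimensions
eigs = 1.0 / np.arange(1, d + 1)         # assumed polynomially decaying spectrum
X = rng.standard_normal((n, d)) * np.sqrt(eigs)
w_star = rng.standard_normal(d) / np.sqrt(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)

def excess_risk(w):
    # population excess risk under the assumed Gaussian design
    return np.sum(eigs * (w - w_star) ** 2)

# one-pass SGD with a constant stepsize, tail-averaging the last half of iterates
step = 0.05                              # stepsize chosen below 1 / tr(covariance)
w = np.zeros(d)
w_avg = np.zeros(d)
for t in range(n):
    x_t, y_t = X[t], y[t]
    w -= step * (x_t @ w - y_t) * x_t
    if t >= n // 2:
        w_avg += w / (n - n // 2)

# ridge regression with a hand-picked penalty for comparison
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

print(f"tail-averaged SGD excess risk: {excess_risk(w_avg):.4f}")
print(f"ridge (lambda={lam}) excess risk: {excess_risk(w_ridge):.4f}")

Printing the two excess risks only gives a rough, single-instance sense of the comparison; the results in the paper are instance-dependent and far more precise.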

Gap-Dependent Unsupervised Exploration for Reinforcement Learning
Jingfeng Wu, Vladimir Braverman, Lin F. Yang
ICML Workshop on Reinforcement Learning Theory, 2021
bibtex / arXiv / poster

We show an improved sample complexity for unsupervised reinforcement learning when the problem instances have a constant sub-optimality gap.

Lifelong Learning with Sketched Structural Regularization
Haoran Li, Aditya Krishnan, Jingfeng Wu, Soheil Kolouri, Praveen K. Pilly, Vladimir Braverman
Asian Conference on Machine Learning (ACML), 2021
bibtex / arXiv

We show that sketching methods improve structural regularization algorithms for lifelong learning.

Benign Overfitting of Constant-Stepsize SGD for Linear Regression
Difan Zou*, Jingfeng Wu*, Vladimir Braverman, Quanquan Gu, Sham M. Kakade
Annual Conference on Learning Theory (COLT), 2021
bibtex / arXiv / slides

We study generalization bounds of constant-stepsize SGD for overparameterized linear regression problems, characterizing when it overfits benignly.

Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning
Jingfeng Wu, Vladimir Braverman, Lin F. Yang
Conference on Neural Information Processing Systems (NeurIPS), 2021
bibtex / arXiv / slides / poster / code

We study the regret bound and sample complexity for multi-objective reinforcement learning with potentially adversarial preferences.

Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate
Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu
International Conference on Learning Representations (ICLR), 2021
bibtex / arXiv / slides / poster

We show a directional bias of SGD with a moderate learning rate; this effect cannot be achieved by GD or by SGD with a small learning rate.

Obtaining Adjustable Regularization for Free via Iterate Averaging
Jingfeng Wu, Vladimir Braverman, Lin F. Yang
International Conference on Machine Learning (ICML), 2020
bibtex / arXiv / slides / code

We show that, in many cases, an l2-type regularization effect can be obtained for free by properly averaging the optimization path.
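
As a rough illustration (a heuristic toy comparison, not the averaging scheme or analysis from the paper), the Python snippet below runs plain gradient descent on a small least-squares problem, uniformly averages the iterates, and compares the average against a ridge solution whose penalty is set to the heuristic value 1/(eta*T).

import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 20
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star + 0.5 * rng.standard_normal(n)

H = X.T @ X / n                      # Hessian of the least-squares loss
g = X.T @ y / n

# gradient descent on the least-squares loss, keeping a running average of the iterates
eta, T = 0.05, 200
w = np.zeros(d)
w_avg = np.zeros(d)
for _ in range(T):
    w -= eta * (H @ w - g)
    w_avg += w / T

# ridge solution with a heuristic penalty (an assumption for illustration only)
lam = 1.0 / (eta * T)
w_ridge = np.linalg.solve(H + lam * np.eye(d), g)

print("||w_avg - w_ridge|| =", np.linalg.norm(w_avg - w_ridge))
print("||w_avg||   =", np.linalg.norm(w_avg))
print("||w_ridge|| =", np.linalg.norm(w_ridge))

In this toy setting the two solutions tend to land close to each other; the paper makes such a correspondence precise and shows how the effective regularization can be adjusted through the averaging.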

On the Noisy Gradient Descent that Generalizes as SGD
Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, Vladimir Braverman, Zhanxing Zhu
International Conference on Machine Learning (ICML), 2020
bibtex / arXiv / slides / code

For noisy gradient methods, we show that the distribution class of the noise does not affect the regularization effect.

Tangent-Normal Adversarial Regularization for Semi-supervised Learning
Bing Yu*, Jingfeng Wu*, Jinwen Ma, Zhanxing Zhu
Conference on Computer Vision and Pattern Recognition (CVPR), 2019, oral
bibtex / arXiv / slides / poster / code

We present a novel manifold regularization method for semi-supervised learning. The regularizer is realized via adversarial training.

The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects
Zhanxing Zhu*, Jingfeng Wu*, Bing Yu, Lei Wu, Jinwen Ma
International Conference on Machine Learning (ICML), 2019
bibtex / arXiv / slides / poster / code

We study the anisotropic noise structure of stochastic gradient descent and demonstrate its benefits in helping the dynamics escape sharp minima.

Services
  • Reviewer: ICML 2020 - 2021, NeurIPS 2020 - 2021, ICLR 2021 - 2022, AISTATS 2021 - 2022
  • PC Member: AAAI 2021 - 2022
  • Journal Reviewer: TPAMI
Awards
  • ICML 2021 Best Reviewers (Top 10%)
  • JHU MINDS 2021 Summer Data Science Fellowship
Experiences
  • Aug. 2019 - Now: Research Assistant at Johns Hopkins University
  • Dec. 2018 - May 2019: Research Intern at Baidu Big Data Lab
  • Jul. 2017 - Dec. 2018: Research Assistant at Peking University

Template: this