Jingfeng Wu / 吴京风

I am a Ph.D. candidate (2019 - now) in the Computer Science Department at Johns Hopkins University, advised by Prof. Vladimir Braverman. Previously, I obtained a B.S. in Mathematics (2012 - 2016) and an M.S. in Applied Mathematics (2016 - 2019) from the School of Mathematical Sciences at Peking University.

In summer 2022, I was fortunate to intern at Google Research. My hosts were Wennan Zhu and Peter Kairouz.

Email  /  CV  /  Google Scholar  /  GitHub  /  Twitter

News
  • [09/2022] Looking for a postdoc position starting in summer 2023. Please contact me if interested!
  • [09/2022] Two papers are accepted to NeurIPS 2022!
  • [06/2022] Interning at Google Research Seattle during summer 2022.
  • [05/2022] One paper is accepted to ICML 2022 as a long presentation!
  • [01/2022] One paper is accepted to AISTATS 2022.
  • [12/2021] Passed the Ph.D. candidacy exam.
  • [09/2021] Two papers are accepted to NeurIPS 2021.
  • [09/2021] Talk on problem-dependent bounds for constant-stepsize SGD at the JHU CS Theory Seminar.
  • [05/2021] Awarded MINDS Summer Data Science Fellowship. Many thanks to the committee.
  • [05/2021] One paper is accepted to COLT 2021.
  • [03/2021] Talk on the implicit bias of SGD at UCLA.
  • [02/2021] In a relationship with Yuan, happy Valentine's day~
  • [01/2021] One paper is accepted to ICLR 2021.
  • [12/2020] Talk on the implicit bias of SGD at the JHU CS Theory Seminar.
  • [10/2020] One paper will be presented at the Workshop on Optimization for Machine Learning (OPT).
  • [05/2020] Two papers are accepted to ICML 2020.
  • [03/2020] Stay home, stay safe.
  • [11/2019] One paper will be presented orally at the Workshop on Optimization for Machine Learning (OPT).
  • [09/2019] Started working at Hopkins. Veritas vos liberabit!
  • [06/2019] Graduated from Peking University.
  • [04/2019] One paper is accepted to ICML 2019.
  • [03/2019] One paper is accepted to CVPR 2019 as an oral presentation.
  • [12/2018] Looking for a Ph.D. position in machine learning / statistical learning starting in fall 2019.
Research

My research focuses on providing a theoretical understanding of machine learning problems stemming from practice.

  • I am interested in understanding how an algorithm "adapts" to a problem in the statistical learning context, with a focus on algorithm- and problem-dependent performance guarantees.
  • I am also interested in developing efficient learning algorithms, with the aim of connecting theoretical guarantees to practical motivations.
Keywords: Machine Learning, Reinforcement Learning, Stochastic Gradient Descent, Sketching

Publications
The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift
Jingfeng Wu*, Difan Zou*, Vladimir Braverman, Quanquan Gu, Sham M. Kakade
Conference on Neural Information Processing Systems (NeurIPS), 2022
bibtex / arXiv / poster
Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime
Difan Zou*, Jingfeng Wu*, Vladimir Braverman, Quanquan Gu, Sham M. Kakade
Conference on Neural Information Processing Systems (NeurIPS), 2022
bibtex / arXiv
Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression
Jingfeng Wu*, Difan Zou*, Vladimir Braverman, Quanquan Gu, Sham M. Kakade
International Conference on Machine Learning (ICML), 2022, long presentation
bibtex / arXiv / slides / poster
Gap-Dependent Unsupervised Exploration for Reinforcement Learning
Jingfeng Wu, Vladimir Braverman, Lin F. Yang
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
bibtex / arXiv / slides / poster / code
The Benefits of Implicit Regularization from SGD in Least Squares Problems
Difan Zou*, Jingfeng Wu*, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade
Conference on Neural Information Processing Systems (NeurIPS), 2021
bibtex / arXiv
Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning
Jingfeng Wu, Vladimir Braverman, Lin F. Yang
Conference on Neural Information Processing Systems (NeurIPS), 2021
bibtex / arXiv / slides / poster / code
Lifelong Learning with Sketched Structural Regularization
Haoran Li, Aditya Krishnan, Jingfeng Wu, Soheil Kolouri, Praveen K. Pilly, Vladimir Braverman
Asian Conference on Machine Learning (ACML), 2021
bibtex / arXiv
Benign Overfitting of Constant-Stepsize SGD for Linear Regression
Difan Zou*, Jingfeng Wu*, Vladimir Braverman, Quanquan Gu, Sham M. Kakade
Annual Conference on Learning Theory (COLT), 2021
bibtex / arXiv / slides
Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate
Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu
International Conference on Learning Representations (ICLR), 2021
bibtex / arXiv / slides / poster
Obtaining Adjustable Regularization for Free via Iterate Averaging
Jingfeng Wu, Vladimir Braverman, Lin F. Yang
International Conference on Machine Learning (ICML), 2020
bibtex / arXiv / slides / code
On the Noisy Gradient Descent that Generalizes as SGD
Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, Vladimir Braverman, Zhanxing Zhu
International Conference on Machine Learning (ICML), 2020
bibtex / arXiv / slides / code
Tangent-Normal Adversarial Regularization for Semi-supervised Learning
Bing Yu*, Jingfeng Wu*, Jinwen Ma, Zhanxing Zhu
Conference on Computer Vision and Pattern Recognition (CVPR), 2019, oral
bibtex / arXiv / slides / poster / code
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects
Zhanxing Zhu*, Jingfeng Wu*, Bing Yu, Lei Wu, Jinwen Ma
International Conference on Machine Learning (ICML), 2019
bibtex / arXiv / slides / poster / code
Talks
Learning theory seminar, Google, August 2022
A Fine-Grained Characterization for the Implicit Bias of SGD in Least Squares Problems
Math Machine Learning seminar, MPI MiS + UCLA, June 2022
A Fine-Grained Characterization for the Implicit Bias of SGD in Least Squares Problems
Services
Conference Reviewer
ICML 2020 - 2022
NeurIPS 2020 - 2022
ICLR 2021 - 2023
AISTATS 2021 - 2022
PC Member
AAAI 2021 - 2023
Journal Reviewer
TPAMI
JMLR
TMLR
Awards
ICML 2021 Best Reviewers (Top 10%)
JHU MINDS 2021 Summer Data Science Fellowship

Template: this