Zhize Li
Assistant Professor, Singapore Management University
Verified email at smu.edu.sg - Homepage
Title
Cited by
Year
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Z Li, D Kovalev, X Qian, P Richtárik
International Conference on Machine Learning (ICML 2020), 2020
162 · 2020
PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization
Z Li, H Bao, X Zhang, P Richtárik
International Conference on Machine Learning (ICML 2021), 2020
135 · 2020
A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
Z Li, J Li
Neural Information Processing Systems (NeurIPS 2018), 2018
122 · 2018
MARINA: Faster Non-Convex Distributed Learning with Compression
E Gorbunov, K Burlachenko, Z Li, P Richtárik
International Conference on Machine Learning (ICML 2021), 2021
118 · 2021
A unified variance-reduced accelerated gradient method for convex optimization
G Lan*, Z Li*, Y Zhou*
Neural Information Processing Systems (NeurIPS 2019), 2019
72 · 2019
Learning Two-layer Neural Networks with Symmetric Inputs
R Ge*, R Kuditipudi*, Z Li*, X Wang*
International Conference on Learning Representations (ICLR 2019), 2019
67 · 2019
Gradient Boosting With Piece-Wise Linear Regression Trees
Y Shi, J Li, Z Li
International Joint Conference on Artificial Intelligence (IJCAI 2019), 2019
62 · 2019
On Top-k Selection in Multi-Armed Bandits and Hidden Bipartite Graphs
W Cao, J Li, Y Tao, Z Li
Neural Information Processing Systems (NIPS 2015), 2015
62 · 2015
EF21 with bells & whistles: Practical algorithmic extensions of modern error feedback
I Fatkhullin, I Sokolov, E Gorbunov, Z Li, P Richtárik
arXiv preprint arXiv:2110.03294, 2021
51 · 2021
SSRGD: Simple Stochastic Recursive Gradient Descent for Escaping Saddle Points
Z Li
Neural Information Processing Systems (NeurIPS 2019), 2019
50 · 2019
Optimal in-place suffix sorting
Z Li, J Li, H Huo
Information and Computation, 2022 [arXiv:1610.08305], 2016
48* · 2016
A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization
Z Li, P Richtárik
arXiv preprint arXiv:2006.07013, 2020
46 · 2020
BEER: Fast Rate for Decentralized Nonconvex Optimization with Communication Compression
H Zhao, B Li, Z Li, P Richtárik, Y Chi
Neural Information Processing Systems (NeurIPS 2022), 2022
42 · 2022
SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression
Z Li, H Zhao, B Li, Y Chi
Neural Information Processing Systems (NeurIPS 2022), 2022
36 · 2022
Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization
R Ge*, Z Li*, W Wang*, X Wang*
Conference on Learning Theory (COLT 2019), 2019
36 · 2019
3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation
P Richtárik, I Sokolov, I Fatkhullin, E Gasanov, Z Li, E Gorbunov
International Conference on Machine Learning (ICML 2022), 2022
31 · 2022
FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning
H Zhao, Z Li, P Richtárik
arXiv preprint arXiv:2108.04755, 2021
31 · 2021
Stochastic Gradient Hamiltonian Monte Carlo with Variance Reduction for Bayesian Inference
Z Li, T Zhang, S Cheng, J Zhu, J Li
Machine Learning 108, 1701-1727, 2019
31 · 2019
ZeroSARAH: Efficient nonconvex finite-sum optimization with zero full gradient computation
Z Li, S Hanzely, P Richtárik
arXiv preprint arXiv:2103.01447, 2021
30 · 2021
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression
Z Li, P Richtárik
Neural Information Processing Systems (NeurIPS 2021), 2021
29 · 2021
Articles 1–20