Springer, 2020. — 285 p. — ISBN: 9811529094.
This book on optimization includes forewords by Michael I. Jordan, Zongben Xu, and Zhi-Quan Luo. Machine learning relies heavily on optimization to solve problems with its learning models, and first-order optimization algorithms are the mainstream approaches. Accelerating first-order optimization algorithms is therefore crucial for the efficiency of machine learning. Written by leading experts in the field, this book provides a comprehensive introduction to, and state-of-the-art review of, accelerated first-order optimization algorithms for machine learning. It discusses a variety of methods, both deterministic and stochastic, synchronous and asynchronous, for unconstrained and constrained problems, whether convex or non-convex. Offering a rich blend of ideas, theories, and proofs, the book is up-to-date and self-contained. It is an excellent reference for users seeking faster optimization algorithms, as well as for graduate students and researchers who want to grasp the frontiers of optimization in machine learning quickly.
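As a taste of the book's subject (this sketch is illustrative, not taken from the text), the following minimal NumPy example contrasts plain gradient descent with Nesterov's accelerated gradient method, the prototypical accelerated first-order algorithm, on a smooth convex quadratic; the problem data and constants are assumptions chosen for the demonstration.

    import numpy as np

    # Illustrative example (not from the book): minimize the smooth convex
    # quadratic f(x) = 0.5 x^T A x - b^T x with gradient descent versus
    # Nesterov's accelerated gradient method, both with step size 1/L.
    rng = np.random.default_rng(0)
    n = 50
    M = rng.standard_normal((n, n))
    A = M.T @ M + np.eye(n)          # positive definite, so f is convex
    b = rng.standard_normal(n)
    L = np.linalg.eigvalsh(A).max()  # smoothness constant (largest eigenvalue)

    def grad(x):
        return A @ x - b

    x_gd = np.zeros(n)
    x_prev = x_acc = np.zeros(n)
    for k in range(1, 501):
        # Plain gradient descent: O(1/k) convergence on smooth convex problems.
        x_gd = x_gd - grad(x_gd) / L
        # Nesterov acceleration: extrapolate with momentum weight (k-1)/(k+2),
        # then take a gradient step at the extrapolated point; O(1/k^2) rate.
        y = x_acc + (k - 1) / (k + 2) * (x_acc - x_prev)
        x_prev, x_acc = x_acc, y - grad(y) / L

    x_star = np.linalg.solve(A, b)   # exact minimizer, since grad(x*) = 0
    print("GD  error:", np.linalg.norm(x_gd - x_star))
    print("AGD error:", np.linalg.norm(x_acc - x_star))

Running this shows the accelerated iterate reaching a much smaller error than gradient descent in the same number of steps, which is exactly the speedup phenomenon the book analyzes in depth.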
Foreword by Michael I. Jordan.
Foreword by Zongben Xu.
Foreword by Zhi-Quan Luo.
Acknowledgements.
About the Authors.
Accelerated Algorithms for Unconstrained Convex Optimization.
Accelerated Algorithms for Constrained Convex Optimization.
Accelerated Algorithms for Nonconvex Optimization.
Accelerated Stochastic Algorithms.
Accelerated Parallel Algorithms.
Conclusions.
Appendix A: Mathematical Preliminaries.
True PDF