Distributed optimization methods with centralized or decentralized computation strategies are virtually indispensable for tackling the challenges that arise from large-scale machine learning problems.
In recent years, the exponential growth of digital data has opened new challenges for traditional serial methods concerning computation and storage, in short, scalability. The Alternating Direction Method of Multipliers (ADMM) is a popular and promising distributed framework for solving large-scale machine learning problems. We investigate the impact of network topology on the performance of ADMM-based learning of Support Vector Machines (SVMs) using expander and mean-degree graphs, as well as some common modern network topologies. We consider decentralized consensus-based ADMM, in which nodes may only communicate with one-hop neighbors. In particular, we investigate to which degree the expansion property of the network influences convergence in terms of iterations, training time, and communication time. The results show that the convergence of decentralized ADMM-based learning of SVMs improves on graphs with large spectral gaps and higher, more homogeneous degrees. We furthermore suggest which topologies are preferable. Additionally, we provide an implementation that makes these theoretical advances easily available.
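To make the setting concrete, the following is a minimal sketch (not the paper's implementation) of decentralized consensus-based ADMM with one-hop communication, run on two hypothetical topologies: a random 4-regular graph, which is a good expander with a large spectral gap, and a ring, which has a small spectral gap. To keep each local update in closed form, it uses a ridge-style least-squares loss as a quadratic stand-in for the SVM hinge loss; all sizes, the penalty rho, and the stopping rule are illustrative assumptions.

```python
# Sketch only: quadratic local losses stand in for the SVM hinge loss, and all
# problem sizes, rho, and tolerances are illustrative assumptions.
import networkx as nx
import numpy as np

def spectral_gap(graph):
    """Second-smallest eigenvalue of the normalized graph Laplacian."""
    lap = nx.normalized_laplacian_matrix(graph).toarray()
    return np.sort(np.linalg.eigvalsh(lap))[1]

def run_dadmm(graph, A, b, rho=1.0, tol=1e-4, max_iter=2000):
    """Decentralized consensus ADMM; nodes exchange iterates only with one-hop neighbors."""
    nodes = list(graph.nodes)
    n_features = A[0].shape[1]
    neighbors = {i: list(graph.neighbors(i)) for i in nodes}
    x = np.zeros((len(nodes), n_features))      # local primal iterates
    alpha = np.zeros_like(x)                    # local dual iterates
    for it in range(1, max_iter + 1):
        x_old = x.copy()
        for i in nodes:                         # local primal update (closed form for quadratic loss)
            deg = len(neighbors[i])
            rhs = A[i].T @ b[i] - alpha[i] + rho * sum(x_old[i] + x_old[j] for j in neighbors[i])
            x[i] = np.linalg.solve(A[i].T @ A[i] + 2 * rho * deg * np.eye(n_features), rhs)
        for i in nodes:                         # dual update driven by one-hop disagreement
            alpha[i] += rho * sum(x[i] - x[j] for j in neighbors[i])
        if np.abs(x - x.mean(axis=0)).max() < tol:
            return it                           # iterations until (near-)consensus
    return max_iter

rng = np.random.default_rng(0)
n_nodes, n_features, n_local = 16, 5, 20
w_true = rng.normal(size=n_features)
A = [rng.normal(size=(n_local, n_features)) for _ in range(n_nodes)]
b = [A_i @ w_true + 0.1 * rng.normal(size=n_local) for A_i in A]

for name, g in [("expander", nx.random_regular_graph(4, n_nodes, seed=0)),
                ("ring", nx.cycle_graph(n_nodes))]:
    print(f"{name}: spectral gap {spectral_gap(g):.3f}, "
          f"iterations to consensus {run_dadmm(g, A, b)}")
```

Under these assumptions, the expander's larger spectral gap typically yields consensus in noticeably fewer iterations than the ring, which is the qualitative effect studied here; the actual SVM experiments replace the quadratic local solve with a hinge-loss subproblem.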