Distributed Optimization Methods In Large-Scale Systems With Realistic Constraints

Originally motivated by the emergence of networked systems lacking central coordination, such as multiprocessors, wireless sensor networks, and smart grids, the study of distributed optimization algorithms has been an active field of research spanning multiple decades. More recently, the rapid growth in the availability of high-dimensional datasets has posed the problem of learning efficiently and securely from data located across thousands of devices. In addition, distributed optimization for networks of mobile agents has gained significant traction in recent years due to advances in robotics and autonomous vehicle research. However, large-scale learning and mobile agent systems have inherent challenges that are typically overlooked in the distributed optimization literature. This thesis aims to address some of these concerns.

In the first part of this thesis, we propose and analyze a first-order distributed method (S-NEAR-DGD) that utilizes cost-efficient stochastic gradient approximations and tolerates inexact communication, alleviating the problems of excessive gradient computation costs and communication bottlenecks in large-scale machine learning. S-NEAR-DGD is based on a class of flexible deterministic algorithms (NEAR-DGD) that permit adjusting the amounts of communication and computation performed to best accommodate the application environment. Under strong convexity and Lipschitz continuity of the gradients, we show that S-NEAR-DGD converges linearly to an error neighborhood of the optimal solution. Moreover, we provide numerical results demonstrating that S-NEAR-DGD is robust to types of inexact communication that may cause other state-of-the-art methods to diverge.

In the second part of this thesis, we consider the setting of nonconvex distributed optimization, which features prominently in machine learning applications and for which convergence guarantees are particularly challenging to obtain. Utilizing novel Lyapunov functions and under weaker assumptions than existing works on the same topic, we prove convergence of the iterates of the NEAR-DGD method to critical points. Moreover, we employ results from dynamical systems theory to demonstrate that NEAR-DGD almost always avoids strict saddle points and thus likely converges to minimizers. Our numerical results are promising and indicate that NEAR-DGD performs competitively against state-of-the-art methods.

In the last part of this thesis, we consider the multi-agent rendezvous problem, i.e., guiding a group of agents to a common meeting point, with applications in multi-robot and multi-vehicle networks. We treat rendezvous as a distributed consensus optimization problem and develop a fully asynchronous algorithm that can handle any number of agents and spaces of any dimension, and that provably converges to an arbitrarily small neighborhood of the optimal rendezvous point. Our method is robust to outdated information and to potentially erroneous displacements caused by the continuously moving nature of robotic agents.
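To give a concrete flavor of the kind of iteration the abstract describes, the following is a minimal sketch of a NEAR-DGD-style method that alternates a local gradient (computation) step with a tunable number of neighbor-averaging (communication) rounds. The toy quadratic costs, five-agent ring network, Metropolis-style weights, step size, and number of consensus rounds are all illustrative assumptions for this sketch, not the algorithms or experiments of the thesis itself; with a finite number of consensus rounds, the agents reach only a neighborhood of the optimum, mirroring the error-neighborhood guarantees discussed above.

```python
import numpy as np

# Illustrative setup (assumed, not from the thesis): n agents, each holding a
# local quadratic cost f_i(x) = 0.5 * (x - b_i)^2. The global minimizer of
# sum_i f_i is the average of the b_i.
np.random.seed(0)
n = 5
b = np.random.randn(n)

def local_grad(i, x):
    # Gradient of agent i's local cost at its own estimate x.
    return x - b[i]

# Doubly stochastic mixing matrix for a ring graph: each agent averages with
# its two neighbors using weight 1/3 and keeps weight 1/3 on itself.
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

alpha = 0.1     # step size (illustrative choice)
t_comm = 2      # communication rounds per iteration; more rounds -> smaller error
x = np.zeros(n) # one scalar estimate per agent

for k in range(200):
    # Computation step: each agent takes a gradient step on its local cost.
    y = np.array([x[i] - alpha * local_grad(i, x[i]) for i in range(n)])
    # Communication step: t_comm rounds of averaging with neighbors.
    for _ in range(t_comm):
        y = W @ y
    x = y

print(x)         # each agent's estimate ends up near mean(b)
print(b.mean())
```

Increasing `t_comm` trades extra communication for a tighter neighborhood around the true minimizer, which is the communication/computation trade-off that the NEAR-DGD family is designed to expose.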
