
Distributed Optimization in Open Networks with Faults


The field of distributed optimization algorithms is wide and growing, and new techniques for categorizing and analyzing existing algorithms are continually being developed. In this thesis, we leverage control-theoretic analysis and block-diagram interpretations of existing algorithms to synthesize entirely new families of algorithms. These new algorithm families are self-healing: their convergence to the correct optimizer is guaranteed even if they are initialized arbitrarily, agents join or leave the network, local cost functions change, or packets are lost. In the setting of dynamic average consensus, we exploit the self-healing properties of our algorithm to guarantee both accuracy (i.e., exact asymptotic computation of the global average) and privacy (i.e., no node can estimate another node's reference value). To achieve this, we require that the digraph modeling the communication between nodes satisfy certain topological conditions. We describe a parameterized family of first-order distributed optimization algorithms, based on SVL, that enables a network of agents to collaboratively compute a decision variable minimizing the sum of the cost functions held by the agents. Our algorithms are the first single-Laplacian methods for distributed convex optimization to be entirely self-healing. We achieve self-healing by sacrificing internal stability, a fundamental trade-off for single-Laplacian gradient-tracking methods. We extend our self-healing methods to algorithms based on DOGT and ADMM, which can minimize non-convex and non-smooth objectives, respectively.
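For context, the gradient-tracking family of methods this work builds on can be sketched as follows. This is a minimal illustration of classical gradient tracking, not the thesis's self-healing SVL-based variants; the network (a four-agent ring), the quadratic local costs, and the step size are all hypothetical choices made for the example. Each agent mixes its decision variable and a tracker of the average gradient with its neighbors, and all agents converge to the minimizer of the sum of the local costs.

```python
import numpy as np

# Hypothetical setup: 4 agents on a ring, scalar decision variable.
# Agent i holds the local cost f_i(x) = 0.5 * a[i] * (x - b[i])**2,
# so the minimizer of sum_i f_i(x) is sum(a*b) / sum(a).
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 2.0, 1.0])
x_star = np.sum(a * b) / np.sum(a)

# Doubly stochastic mixing matrix for the ring (lazy Metropolis weights).
W = np.array([
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5],
])

def grad(x):
    """Stacked local gradients: entry i is grad f_i evaluated at x_i."""
    return a * (x - b)

x = np.zeros(4)        # local decision variables x_i
y = grad(x)            # gradient trackers, initialized as y_i = grad f_i(x_i)
alpha = 0.05           # step size (assumed small enough for convergence)

for _ in range(400):
    # Mix with neighbors and step along the tracked average gradient,
    # then update the tracker with the change in the local gradient.
    x_new = W @ x - alpha * y
    y = W @ y + grad(x_new) - grad(x)
    x = x_new

# All agents agree on the global minimizer.
assert np.allclose(x, x_star, atol=1e-6)
```

Note that this classical scheme requires the specific initialization y_i = grad f_i(x_i); the self-healing property described in the abstract refers precisely to removing such initialization requirements, so that the iteration recovers after arbitrary state perturbations, dropped packets, or agents joining and leaving.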

