FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai
All materials you need for Federated Learning: blogs, videos, papers, software, etc.
Federated Optimization in Heterogeneous Networks (MLSys '20)
Fair Resource Allocation in Federated Learning (ICLR '20)
FedTorch is a generic repository for benchmarking different federated and distributed learning algorithms using the PyTorch Distributed API.
Symbolic Continuous-Time Gaussian Belief Propagation Framework with Ceres Interoperability
Distributed Linear Programming Solver on top of Apache Spark
DISROPT: A Python framework for distributed optimization
This library is an implementation of the algorithm described in "Distributed Trajectory Estimation with Privacy and Communication Constraints: a Two-Stage Distributed Gauss-Seidel Approach."
Implementation of (overlap) local SGD in PyTorch (see the local SGD sketch after this list).
A package for solving optimal power flow problems using distributed algorithms.
FedDANE: A Federated Newton-Type Method (Asilomar Conference on Signals, Systems, and Computers '19)
Communication-efficient decentralized SGD (PyTorch)
Scalable, structured, dynamically-scheduled hyperparameter optimization.
A Ray-based library of Distributed POPulation-based OPtimization for Large-Scale Black-Box Optimization.
MATLAB implementation of the paper "Distributed Optimization of Average Consensus Containment with Multiple Stationary Leaders" [arXiv 2022].
Sparse Convex Optimization Toolkit (SCOT)
tvopt is a prototyping and benchmarking Python framework for time-varying (or online) optimization.
MATLAB implementation of the paper "Online Distributed Optimal Power Flow with Equality Constraints" [arXiv 2022].
We present a set of all-reduce-compatible gradient compression algorithms that significantly reduce communication overhead while maintaining the performance of vanilla SGD. We empirically evaluate the compression methods by training deep neural networks on the CIFAR-10 dataset.
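As a hedged illustration of the idea in the entry above, the sketch below shows one simple all-reduce-compatible compressor: sign compression with error feedback, whose compressed gradient keeps the original dense shape and can therefore be averaged with a single dense all-reduce. The function name, tensor sizes, and the choice of compressor are assumptions for this example and are not taken from that repository; the cross-worker averaging is simulated in-process instead of calling torch.distributed.

```python
import torch

def compress_with_error_feedback(grad, residual):
    # Add the error left over from the previous step (error feedback).
    corrected = grad + residual
    scale = corrected.abs().mean()            # one scalar per tensor
    compressed = scale * corrected.sign()     # dense, fixed shape => all-reduce friendly
    residual = corrected - compressed         # remember what compression dropped
    return compressed, residual

# Simulate 4 workers holding different gradients for the same parameter.
grads = [torch.randn(1000) for _ in range(4)]
residuals = [torch.zeros(1000) for _ in range(4)]

compressed, residuals = zip(*(compress_with_error_feedback(g, r)
                              for g, r in zip(grads, residuals)))

# In a real multi-process job this averaging is a torch.distributed.all_reduce.
averaged_update = torch.stack(compressed).mean(0)
print(averaged_update.shape)  # torch.Size([1000])
```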
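Likewise, for the (overlap) local SGD entry above, here is a minimal in-process sketch of plain local SGD: each simulated worker takes several local SGD steps on its own copy of the model, and parameters are averaged once per communication round. The worker count, model, data, and hyperparameters are made up for the example; a real implementation runs one process per worker and averages with torch.distributed collectives.

```python
import copy
import torch

torch.manual_seed(0)
num_workers, local_steps, rounds, lr = 4, 8, 5, 0.1

global_model = torch.nn.Linear(10, 1)
# Each simulated worker gets its own random data shard.
data = [(torch.randn(32, 10), torch.randn(32, 1)) for _ in range(num_workers)]

for _ in range(rounds):
    # Every worker starts the round from the current global parameters.
    workers = [copy.deepcopy(global_model) for _ in range(num_workers)]
    for model, (x, y) in zip(workers, data):
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(local_steps):  # local steps with no communication
            opt.zero_grad()
            torch.nn.functional.mse_loss(model(x), y).backward()
            opt.step()
    # Communication round: average parameters across workers
    # (one all-reduce per parameter in a distributed implementation).
    with torch.no_grad():
        for name, p in global_model.named_parameters():
            p.copy_(torch.stack(
                [dict(m.named_parameters())[name] for m in workers]).mean(0))
```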