FLUTE is a simulation framework for running large-scale offline federated learning algorithms
Microsoft Research has recently released Federated Learning Utilities and Tools for Experimentation (FLUTE), a new simulation framework to accelerate the development of federated learning algorithms. It opens a new direction in federated learning research by allowing engineers and researchers to design and simulate new algorithms before they are developed and deployed.
FLUTE is a simulation framework for running large-scale offline federated learning algorithms. The main goal of federated learning is to train complex machine-learning models over massive amounts of data without sharing that data in a centralised location. In this approach, the initial global model is distributed to each device, which typically has limited computational capacity. The data on each device is used to compute small updates to the model, and these small updates are transferred to the centralised system for aggregation.
These steps are iteratively repeated until there is no significant change in the global model.
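A minimal sketch of this loop in plain PyTorch may make the flow concrete. The model, the per-client data loaders, and helper names such as local_update and federated_round are illustrative assumptions, not part of FLUTE's API.

```python
# Minimal sketch of the federated learning loop described above, in plain PyTorch.
# The model, per-client DataLoaders, and helper names are illustrative assumptions,
# not part of FLUTE's API.
import copy

import torch
from torch import nn


def local_update(global_model, data_loader, lr=0.01, epochs=1):
    """Train a copy of the global model on one client's local data."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, targets in data_loader:
            optimizer.zero_grad()
            loss_fn(model(inputs), targets).backward()
            optimizer.step()
    return model.state_dict()  # only this small update leaves the device


def federated_round(global_model, client_loaders):
    """One round: distribute the model, collect client updates, average them."""
    client_states = [local_update(global_model, loader) for loader in client_loaders]
    averaged = {
        name: torch.stack([state[name].float() for state in client_states]).mean(dim=0)
        for name in client_states[0]
    }
    global_model.load_state_dict(averaged)
    return global_model
```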
Despite the flexibility this approach brings, such as distributing the training workload, it raises challenges around managing the many moving pieces of training data and preserving the privacy of each end node. FLUTE tries to address these problems by allowing researchers and developers to test and experiment with constraints such as data privacy, communication strategy, and scalability before implementing and launching models in production.
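As one example of such a constraint, a privacy-minded experiment might clip and noise each client update before it is shared, in the style of differential privacy. The sketch below is a generic illustration of that idea rather than a FLUTE mechanism; privatize_update and its default parameters are hypothetical.

```python
# Generic illustration of a privacy constraint one might experiment with:
# clip each client update's L2 norm and add Gaussian noise before sharing it.
# This is not FLUTE's built-in mechanism; names and defaults are hypothetical.
import torch


def privatize_update(update, clip_norm=1.0, noise_std=0.01):
    """Clip the update's global L2 norm and add noise to every tensor."""
    total_norm = torch.cat([t.flatten().float() for t in update.values()]).norm().item()
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    return {
        name: t.float() * scale + noise_std * torch.randn_like(t.float())
        for name, t in update.items()
    }
```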
FLUTE is built on Python and PyTorch and integrates well with Azure ML. In each round, the server first sends the global model to the clients. Each client trains on its local data and sends the locally generated pseudo-gradient back to the server for aggregation. The server then updates the global model and distributes it to the clients again, repeating these steps until the model converges. The framework supports diverse federated learning configurations, including standardised implementations such as DGA (Dynamic Gradient Aggregation) and FedAvg (Federated Averaging).
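The pseudo-gradient exchange can be sketched as below: each client returns the difference between the weights it received and the weights it trained, and the server applies the averaged difference with its own learning rate. With a server learning rate of 1 this reduces to plain model averaging (FedAvg). The function names are illustrative; the FLUTE repository defines the framework's actual client and server interfaces.

```python
# Sketch of the pseudo-gradient exchange described above. Function names are
# illustrative, not FLUTE's actual client/server interfaces.
import torch


def pseudo_gradient(received_state, trained_state):
    """Pseudo-gradient: weights the client received minus the weights it trained."""
    return {name: received_state[name].float() - trained_state[name].float()
            for name in received_state}


def server_step(global_model, client_pseudo_grads, server_lr=1.0):
    """Average the clients' pseudo-gradients and take one server-side step.

    Treating the averaged difference as a gradient with server_lr=1.0 recovers
    simple model averaging (FedAvg); other aggregation schemes can reweight or
    transform the pseudo-gradients before this step.
    """
    with torch.no_grad():
        for name, param in global_model.named_parameters():
            avg = torch.stack([pg[name] for pg in client_pseudo_grads]).mean(dim=0)
            param -= server_lr * avg
    return global_model
```

Keeping aggregation on the server side is what lets schemes like DGA vary how client contributions are weighted without changing client-side training.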
FLUTE is available to the public on GitHub and comes with the essential tools to start experimenting. The community can look at the FLUTE architecture video, further documentation, and the published FLUTE paper. For future releases, Microsoft Research is also working on algorithmic optimisations, support for additional communication protocols, and easier ways to integrate with Azure ML. Other federated learning frameworks for distributed training and deploying ML models include TensorFlow Federated and IBM Federated Learning.