Basic PPFL Training#

APPFL provides users with the capability to simulate and train PPFL on a single machine, a cluster, or multiple heterogeneous machines. We refer to:

  • simulation as running PPFL experiments on a single machine or a cluster without actual data decentralization

  • training as running PPFL experiments on multiple (heterogeneous) machines with actual decentralization of client datasets

Hence, we describe two types of PPFL runs. Simulating PPFL is useful for those who develop, test, and validate new models and algorithms for PPFL, whereas training PPFL is intended for those who deploy PPFL in actual practical settings.

Sample template#

Note

Before reading this section, we highly recommend checking out the Tutorials, which provide more detailed examples in notebooks.

For either simulation or training, a skeleton of the script for running PPFL can be written as follows:

from appfl import *
from appfl.config import Config
from omegaconf import OmegaConf

def main():

    # load default configuration
    cfg = OmegaConf.structured(Config)

    # change configuration if needed
    ...

    # define model, loss, and data
    model = ...    # user-defined model
    loss_fn = ...  # user-defined loss function
    data = ...     # user-defined datasets

    # the choice of PPFL runs

if __name__ == "__main__":
    main()
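The line `cfg = OmegaConf.structured(Config)` builds a typed configuration object from the `Config` dataclass, whose fields can then be overridden attribute-by-attribute in the "change configuration if needed" step. The dependency-free sketch below illustrates the same pattern with plain dataclasses; the field names `num_epochs`, `lr`, and `device` are placeholders for illustration only, not APPFL's actual configuration fields:

```python
from dataclasses import dataclass

# Hypothetical config fields for illustration -- the real fields are
# defined by appfl.config.Config.
@dataclass
class Config:
    num_epochs: int = 2
    lr: float = 0.01
    device: str = "cpu"

# analogous to cfg = OmegaConf.structured(Config): create a config
# object populated with the declared defaults
cfg = Config()

# "change configuration if needed": override defaults attribute-wise,
# just as one would on an OmegaConf structured config
cfg.num_epochs = 5
cfg.device = "cuda"

print(cfg.num_epochs, cfg.lr, cfg.device)  # 5 0.01 cuda
```

An advantage of the structured (dataclass-backed) approach over a free-form dictionary is that field names and types are declared up front, so a typo in an override surfaces as an error rather than silently adding a new key.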

A few remarks on this template: