How to set configuration

All runs use OmegaConf, a hierarchical configuration system.

Load default configuration

Every run requires a configuration defined in OmegaConf's DictConfig format. We can easily load the default configuration as follows:

[1]:
from omegaconf import DictConfig, OmegaConf
from appfl.config import *
cfg: DictConfig = OmegaConf.structured(Config)

The configuration cfg is now initialized with the default values. Let's check them:

[2]:
print(OmegaConf.to_yaml(cfg))
fed:
  type: federated
  servername: ServerFedAvg
  clientname: ClientOptim
  args:
    server_learning_rate: 0.01
    server_adapt_param: 0.001
    server_momentum_param_1: 0.9
    server_momentum_param_2: 0.99
    optim: SGD
    num_local_epochs: 10
    optim_args:
      lr: 0.001
    use_dp: false
    epsilon: 1
    clip_grad: false
    clip_value: 1
    clip_norm: 1
device: cpu
device_server: cpu
num_clients: 1
num_epochs: 2
num_workers: 0
batch_training: true
train_data_batch_size: 64
train_data_shuffle: true
validation: true
test_data_batch_size: 64
test_data_shuffle: false
data_sanity: false
reproduce: true
pca_dir: ''
params_start: 0
params_end: 49
ncomponents: 40
use_tensorboard: false
load_model: false
load_model_dirname: ''
load_model_filename: ''
save_model: false
save_model_dirname: ''
save_model_filename: ''
checkpoints_interval: 2
save_model_state_dict: false
output_dirname: output
output_filename: result
logginginfo: {}
summary_file: ''
personalization: false
p_layers: []
config_name: ''
max_message_size: 104857600
operator:
  id: 1
server:
  id: 1
  host: localhost
  port: 50051
  use_tls: false
  api_key: null
client:
  id: 1

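Since cfg is a regular OmegaConf object, the printed configuration can also be dumped to and reloaded from a YAML file; a minimal sketch using OmegaConf's save/load API (the file name is only an example):

OmegaConf.save(config=cfg, f="appfl_config.yaml")   # write the configuration to a YAML file (example file name)
cfg_reloaded = OmegaConf.load("appfl_config.yaml")  # read it back as a DictConfig
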
Most variables are self-explanatory.

  • The fed variable selects the federated learning algorithm; each algorithm is defined as a dataclass accessible via appfl.config.fed.*.

  • gRPC configurations (a short override sketch follows this list):

    • max_message_size: the maximum amount of data that can be sent or received in a single RPC call; the default is 104857600 bytes (100 MiB). If the serialized model weights sent in a single call exceed this limit, you need to increase this value.

    • host: the address (hostname or IP) of the server

    • port: the port number of the server
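
For instance, a minimal sketch of overriding these fields after loading the defaults (the host value below is only a placeholder; all field names appear in the default configuration printed earlier):

from appfl.config import *

cfg: DictConfig = OmegaConf.structured(Config)

# gRPC settings for the server endpoint
cfg.server.host = "my-server.example.org"   # placeholder address
cfg.server.port = 50051
cfg.server.use_tls = True                   # enable TLS if the server requires it

# Raise the per-call message size limit, e.g. for larger models
cfg.max_message_size = 4 * 104857600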

Initialize configuration with arguments

We can also initialize the configuration with non-default values. For example, the following code loads the configuration with IIADMM as the algorithm choice.

[3]:
cfg: DictConfig = OmegaConf.structured(Config(
    fed = fed.iiadmm.IIADMM()
))
print(OmegaConf.to_yaml(cfg.fed))
type: iiadmm
servername: IIADMMServer
clientname: IIADMMClient
args:
  num_local_epochs: 1
  accum_grad: true
  coeff_grad: false
  optim: SGD
  optim_args:
    lr: 0.01
  init_penalty: 100.0
  residual_balancing:
    res_on: false
    res_on_every_update: false
    tau: 1.1
    mu: 10
  use_dp: false
  epsilon: 1
  clip_grad: false
  clip_value: 1
  clip_norm: 1

Change configuration values

We can also change configuration values after initialization. For example, we can change the fed variable as follows:

[4]:
cfg: DictConfig = OmegaConf.structured(Config)
my_fed: DictConfig = OmegaConf.structured(fed.fedasync.FedAsync)
cfg.fed = my_fed
print(OmegaConf.to_yaml(cfg.fed))
type: fedasync
servername: ServerFedAsynchronous
clientname: ClientOptim
args:
  server_learning_rate: 0.01
  server_adapt_param: 0.001
  server_momentum_param_1: 0.9
  server_momentum_param_2: 0.99
  optim: SGD
  num_local_epochs: 10
  optim_args:
    lr: 0.001
  use_dp: false
  epsilon: 1
  clip_grad: false
  clip_value: 1
  clip_norm: 1
  K: 3
  alpha: 0.9
  staleness_func:
    name: constant
    args:
      a: 0.5
      b: 4
  gradient_based: false
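
Nested fields under cfg.fed.args can be changed in the same way, and OmegaConf can also merge command-line-style overrides given as a dotlist. A minimal sketch continuing from the cell above (the override values are only illustrative):

# Override nested fields directly on the existing configuration
cfg.fed.args.num_local_epochs = 5
cfg.fed.args.alpha = 0.5

# Merge overrides written in dot notation, e.g. as collected from the command line
overrides = OmegaConf.from_dotlist(["fed.args.K=5", "num_epochs=10"])
cfg = OmegaConf.merge(cfg, overrides)
print(OmegaConf.to_yaml(cfg.fed.args))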