Resource-Aware Heterogeneous Federated Learning with Specialized Local Models

EasyChair Preprint no. 13376

15 pages
Date: May 20, 2024


Federated Learning (FL) is widely used to train AI/ML models in distributed, privacy-preserving settings. Participant edge devices in FL systems typically hold non-independent and identically distributed (non-IID) private data and unevenly distributed computational resources. Preserving user data privacy while optimizing AI/ML models in a heterogeneous federated network therefore requires addressing both data and system/resource heterogeneity. To address these challenges, we propose Resource-aware Federated Learning (RaFL). RaFL allocates resource-aware specialized models to edge devices using Neural Architecture Search (NAS) and enables the deployment of heterogeneous model architectures through knowledge extraction and fusion. Combining NAS and FL enables on-demand customized model deployment for resource-diverse edge devices. Furthermore, we propose a multi-model architecture fusion scheme that allows the aggregation of the distributed learning results. Results demonstrate RaFL's superior resource efficiency compared to state-of-the-art approaches.
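To make the two ideas in the abstract concrete, the following is a minimal, purely illustrative sketch (not the paper's actual algorithm): each client is assigned a model capacity that fits its resource budget, standing in for NAS-based architecture selection, and the server fuses heterogeneous local results by capacity-weighted averaging of per-class scores, standing in for knowledge extraction and fusion. All names (`Client`, `assign_capacity`, `fuse_predictions`) and the candidate capacities are hypothetical.

```python
# Toy sketch of resource-aware heterogeneous FL (illustrative only;
# not RaFL's actual NAS search or fusion scheme).
from dataclasses import dataclass
from typing import List


@dataclass
class Client:
    name: str
    capacity: int        # proxy for the device's compute resources
    scores: List[float]  # per-class scores produced by the local model


def assign_capacity(resource_budget: int) -> int:
    """Stand-in for NAS-based model selection: pick the largest of a
    few hypothetical candidate model sizes that fits the budget."""
    candidates = [1, 2, 4, 8]
    return max(c for c in candidates if c <= resource_budget)


def fuse_predictions(clients: List[Client]) -> List[float]:
    """Stand-in for multi-model fusion: capacity-weighted averaging of
    the clients' per-class scores into one aggregated result."""
    n_classes = len(clients[0].scores)
    total_cap = sum(c.capacity for c in clients)
    return [
        sum(c.capacity * c.scores[k] for c in clients) / total_cap
        for k in range(n_classes)
    ]


clients = [
    Client("phone", assign_capacity(3), [0.2, 0.8]),   # gets capacity 2
    Client("laptop", assign_capacity(8), [0.6, 0.4]),  # gets capacity 8
]
fused = fuse_predictions(clients)
print(fused)  # [0.52, 0.48]
```

The design point this toy captures is that clients never exchange architectures or raw data, only outputs, so devices with very different model sizes can still contribute to one aggregated result.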

Keyphrases: Edge Computing, Federated Learning, Neural Architecture Search

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:13376,
  author = {Sixing Yu and Pablo Munoz and Ali Jannesari},
  title = {Resource-Aware Heterogeneous Federated Learning with Specialized Local Models},
  howpublished = {EasyChair Preprint no. 13376},
  year = {EasyChair, 2024}}