Training time minimization for federated edge learning with optimized gradient quantization and bandwidth allocation

  • Review Article
  • Published in Frontiers of Information Technology & Electronic Engineering

Abstract

Training a machine learning model with federated edge learning (FEEL) is typically time-consuming due to the constrained computation power of edge devices and the limited wireless resources in edge networks. In this study, the training time minimization problem is investigated in a quantized FEEL system, where heterogeneous edge devices send quantized gradients to the edge server via orthogonal channels. In particular, a stochastic quantization scheme is adopted to compress the uploaded gradients, which reduces the per-round communication overhead but may increase the number of communication rounds. The training time is modeled by taking into account the communication time, the computation time, and the number of communication rounds. Based on the proposed training time model, the intrinsic trade-off between the number of communication rounds and the per-round latency is characterized. Specifically, we analyze the convergence behavior of quantized FEEL in terms of the optimality gap. Furthermore, a joint data-and-model-driven fitting method is proposed to obtain the exact optimality gap, based on which closed-form expressions for the number of communication rounds and the total training time are derived. Constrained by the total bandwidth, the training time minimization problem is formulated as a joint optimization of the quantization level and the bandwidth allocation. To solve it, an algorithm based on alternating optimization is proposed, which alternately solves the quantization subproblem via successive convex approximation and the bandwidth-allocation subproblem via bisection search. Simulation results with different learning tasks and models validate the analysis and demonstrate the near-optimal performance of the proposed optimization algorithm.
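The stochastic quantization scheme described above can be illustrated with a minimal QSGD-style sketch. This is a common instantiation of unbiased stochastic gradient quantization, not necessarily the paper's exact scheme; `stochastic_quantize` is a hypothetical helper name, and `s` plays the role of the quantization level.

```python
import numpy as np

def stochastic_quantize(g, s, rng=None):
    """Unbiased stochastic quantization of a gradient vector (QSGD-style sketch).

    Each coordinate magnitude |g_i|/||g|| is mapped to one of s+1 uniform
    levels, rounding up with a probability chosen so that E[q] = g.
    A larger s means finer quantization (more bits) but less distortion.
    """
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(g)
    if norm == 0:
        return np.zeros_like(g)
    scaled = np.abs(g) / norm * s          # each entry lies in [0, s]
    lower = np.floor(scaled)
    prob_up = scaled - lower               # probability of rounding up
    levels = lower + (rng.random(g.shape) < prob_up)
    return norm * np.sign(g) * levels / s

# Usage: quantize a small gradient; averaging many independent draws
# recovers g, reflecting the unbiasedness used in the convergence analysis.
g = np.array([0.3, -1.2, 0.7, 0.0])
q = stochastic_quantize(g, s=4)
```

Only the norm and the per-coordinate level indices need to be transmitted, which is the source of the per-round communication savings traded against extra rounds.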

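The bisection search used for the bandwidth-allocation subproblem can be sketched as follows, under the simplifying assumption of a fixed per-device spectral efficiency `eff[k]` (bit/s/Hz). In the paper's model the achievable rate depends nonlinearly on the allocated bandwidth, but the outer bisection on the per-round latency is analogous. All names here are illustrative.

```python
def min_max_latency_bandwidth(D, eff, B_total, tol=1e-6):
    """Bisection on the per-round uplink latency T (simplified sketch).

    D[k]   : payload of device k in bits (e.g., the quantized gradient size)
    eff[k] : assumed fixed spectral efficiency of device k in bit/s/Hz
    A latency T is feasible iff giving each device the minimum bandwidth
    b_k = D[k] / (T * eff[k]) needed to finish by T fits in B_total.
    """
    def feasible(T):
        return sum(d / (T * e) for d, e in zip(D, eff)) <= B_total

    lo, hi = 0.0, 1.0
    while not feasible(hi):          # grow the bracket until feasible
        hi *= 2.0
    while hi - lo > tol:             # shrink toward the smallest feasible T
        mid = (lo + hi) / 2.0
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    T = hi                           # hi is always kept feasible
    bw = [d / (T * e) for d, e in zip(D, eff)]
    return T, bw

# Usage: two devices with equal payloads; the device with higher spectral
# efficiency is allocated less bandwidth to meet the common deadline T.
T, bw = min_max_latency_bandwidth(D=[1.0e6, 1.0e6], eff=[1.0, 2.0], B_total=1.0e6)
```

Because every device's required bandwidth decreases monotonically in T, feasibility is monotone and bisection converges to the minimum per-round latency, which is why the subproblem admits this simple search.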



Author information

Authors and Affiliations

Authors

Contributions

Peixi LIU, Jiamo JIANG, Guangxu ZHU, Wei JIANG, and Wu LUO designed the research. Guangxu ZHU, Wei JIANG, and Wu LUO supervised the research. Peixi LIU and Guangxu ZHU implemented the simulations. Peixi LIU drafted the paper. Jiamo JIANG and Guangxu ZHU helped organize the paper. Lei CHENG, Ying DU, and Zhiqin WANG revised and finalized the paper.

Corresponding authors

Correspondence to Jiamo Jiang  (江甲沫) or Guangxu Zhu  (朱光旭).

Ethics declarations

Peixi LIU, Jiamo JIANG, Guangxu ZHU, Lei CHENG, Wei JIANG, Wu LUO, Ying DU, and Zhiqin WANG declare that they have no conflict of interest.

Additional information

Project supported by the National Key R&D Program of China (No. 2020YFB1807100), the National Natural Science Foundation of China (No. 62001310), and the Guangdong Basic and Applied Basic Research Foundation, China (No. 2022A1515010109)

List of supplementary materials

Proof S1 Proof of Theorem 1

Proof S2 Proof of Lemma 1

Table S1 Simulation parameters

Fig. S1 Optimality gap and test accuracy in simulation 1

Fig. S2 Optimality gap and test accuracy in simulation 2




Cite this article

Liu, P., Jiang, J., Zhu, G. et al. Training time minimization for federated edge learning with optimized gradient quantization and bandwidth allocation. Front Inform Technol Electron Eng 23, 1247–1263 (2022). https://doi.org/10.1631/FITEE.2100538

