Abstract
Federated learning (FL) and generative adversarial networks (GANs) are increasingly popular for practical applications, and their combination (FL-GANs) is even more attractive. However, non-independent and identically distributed (non-IID) training data can make model convergence difficult and training unstable under FL, and the resulting client drift also adversely affects GAN training. To address these challenges, we propose an adaptive FL framework, AFL-GAN, which jointly optimizes client selection and the number of local training epochs so as to train high-performance, stable GANs in practical wireless environments. Specifically, we first give a toy example to illustrate why client selection and local training epochs must be optimized in FL-GANs. For the two components of the GAN, we design a training process that shares only the generator while keeping the discriminator local, which reduces communication overhead. Then, we formulate the minimization of the AFL-GAN model loss under a given resource budget, and analyze the effect of client selection and local training epochs on the training performance of FL-GANs. Next, guided by the toy example and the theoretical analysis, we employ the maximum mean discrepancy (MMD) score to weight the contribution of each local model, mitigating the client drift caused by non-IID data, and leverage deep reinforcement learning (DRL) to adaptively optimize client selection and local training epochs. Finally, experimental results show that the proposed framework improves the learning performance of FL-GAN training while saving computation and communication resources, and performs well in resource-constrained settings.
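To make the MMD-based weighting concrete, the sketch below computes a biased squared-MMD estimate with a Gaussian kernel (as in Gretton et al., 2012) and turns per-client MMD scores into aggregation weights. This is a minimal illustration, not the paper's implementation; the inverse-MMD weighting rule and all function names (`gaussian_kernel`, `mmd2`, `contribution_weights`) are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between two sample batches:
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    diff = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between
    # the sample sets x and y (lower means the distributions are closer).
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

def contribution_weights(mmd_scores):
    # Hypothetical weighting rule: a client whose generated samples are
    # closer to the real data (smaller MMD) gets a larger aggregation weight.
    scores = np.asarray(mmd_scores, dtype=float)
    inv = 1.0 / (scores + 1e-8)
    return inv / inv.sum()
```

For example, evaluating `mmd2` between a client generator's samples and a held-out real batch yields that client's score; feeding all scores to `contribution_weights` produces normalized weights for model aggregation.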
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (Nos. 62272069 and 62203077), the Natural Science Foundation of Chongqing under Grant CSTB2022NSCQ-MSX1029, and the Special Research Funding for Chongqing Postdoctoral Researchers under Grant 2021XM2015.
Ethics declarations
Conflict of interest
We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Quan, Y., Guo, S., Qiao, D. et al. Afl-gan: adaptive federated learning for generative adversarial network with resource constraints. CCF Trans. Pervasive Comp. Interact. 6, 1–17 (2024). https://doi.org/10.1007/s42486-023-00141-w