Abstract
Edge and fog computing technologies are akin to cloud computing but operate in closer proximity to users, offering similar services on a more widely distributed and localized scale. To enhance the computing environment and enable efficient offloading of computing requests, we propose a unified federation of these technologies, forming a federated cloud-edge-fog (CEF) system. Unlike current offloading models limited to single-hop and unidirectional vertical scenarios, our model facilitates two-hop, bidirectional (horizontal and vertical) offloading. The CEF model enables not only fog and edge devices to offload tasks to the cloud but also allows the cloud to offload tasks to the edges and fogs, creating a more dynamic and flexible computing ecosystem. To optimize this system, we formulate an optimization problem focused on minimizing the total cost while adhering to latency constraints. We employ simulated annealing as the solution approach. By adopting the proposed CEF model and optimization strategy, organizations can effectively leverage the strengths of cloud, edge, and fog computing while achieving significant cost reductions and improved task offloading efficiency. The findings from our study indicate that adopting a two-hop offloading approach can result in cost savings of 10–20% compared to the traditional one-hop method. Furthermore, when incorporating horizontal and bidirectional offloading, cost savings of approximately 12% and 20% can be achieved, respectively, in contrast to scenarios without horizontal offloading and only unidirectional vertical offloading. This advancement holds promise for optimizing computing resources and enhancing the overall performance of distributed systems in real-world applications.
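The simulated-annealing approach mentioned above can be illustrated with a minimal sketch. The cost function, latency model, and neighborhood move below are hypothetical placeholders for illustration only, not the paper's actual formulation; only the accept/reject logic is the standard simulated-annealing scheme.

```python
import math
import random

def simulated_annealing(cost, neighbor, feasible, x0,
                        t0=1.0, cooling=0.95, steps=2000):
    """Generic simulated annealing: minimize cost(x) over feasible states."""
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = neighbor(x)
        if not feasible(cand):
            continue  # reject states violating the latency constraint
        delta = cost(cand) - cost(x)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best

random.seed(0)
# Toy example (assumed values): pick an offload fraction beta in [0, 1]
# minimizing a placeholder cost while keeping a placeholder latency capped.
cost = lambda b: 5.0 * b + 8.0 * (1.0 - b) ** 2
latency = lambda b: 2.0 * b + 1.0
feasible = lambda b: 0.0 <= b <= 1.0 and latency(b) <= 2.5
neighbor = lambda b: min(1.0, max(0.0, b + random.uniform(-0.1, 0.1)))

best = simulated_annealing(cost, neighbor, feasible, x0=0.5)
```

Because the best-so-far state is tracked separately, the returned solution is never worse than the feasible starting point, even if the chain wanders late in the cooling schedule.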
Funding
This work was supported by the National Science and Technology Council (NSTC), Taiwan, under Grant 109-2221-E-011-104-MY3.
Author information
Contributions
All authors contributed equally to the research. B-SL, BK, and C-YC wrote the main manuscript text. B-SL prepared all the figures. All authors reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Appendix A: Two-hop offloading
A.1: Communication latency
Two-hop offloading can be divided into a first hop and a second hop. For example, in an offloading scenario from \(f_{j,i}\) through \(e_{j}\) to C, the first hop is from \(f_{j,i}\) to \(e_{j}\) and the second hop is from \(e_{j}\) to C. Since two-hop offloading closely resembles one-hop offloading, all one-hop offloading cases are also available in two-hop offloading, along with five extra offloading options: (1) from \(f_{j,i}\) through \(e_{j}\) to C, (2) from \(f_{j,i}\) through \(e_{j}\) to \(e_{j^\prime }\), (3) from \(f_{j,i}\) through \(e_{j}\) to \(f_{j,i^\prime }\), (4) from \(e_{j}\) through \(e_{j^\prime }\) to \(f_{j^\prime ,i}\), and (5) from C through \(e_{j}\) to \(f_{j,i}\). Since the first-hop communication latency is estimated in the same way as in one-hop offloading, only the second-hop communication latency is discussed here. Let \(D_{C}\) be the second-hop communication latency from \(f_{j,i}\) to C, which can be represented as
where \(\lambda _{j,i}\beta _{j,i}\beta ^\prime _{j}\) is the request rate offloaded from \(f_{j,i}\) to C via \(e_j\). Let \(D_{j^\prime }\) be the second-hop communication latency from \(f_{j,i}\) to \(e_{j^\prime }\), which can be represented as
where \(\lambda _{j,i}\beta _{j,i}\beta ^\star _{j,j^\prime }\) is the request rate offloaded from \(f_{j,i}\) to \(e_{j^\prime }\) via \(e_j\). Let \(D_{j,i^\prime }\) be the second-hop communication latency from \(f_{j,i}\) to \(f_{j,i^\prime }\), which can be represented as
where \(\lambda _{j,i}\beta _{j,i}\beta ^\prime _{j,i^\prime }\) is the request rate offloaded from \(f_{j,i}\) to \(f_{j,i^\prime }\) via \(e_j\). Let \(D^\prime _{j^\prime ,i}\) be the second-hop communication latency from \(e_{j}\) to \(f_{j^\prime ,i}\), which can be represented as
where \(\lambda '_{j} \beta ^\star _{j,j'} \beta '_{j',i}\) is the request rate offloaded from \(e_{j}\) to \(f_{j^\prime ,i}\) via \(e_{j^\prime }\). Let \(D''_{j,i}\) be the second-hop communication latency from C to \(f_{j,i}\), which can be represented as
where \(\lambda '' \beta ''_{j} \beta '_{j,i}\) is the request rate offloaded from C to \(f_{j,i}\) via \(e_j\).
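Each second-hop rate above is simply the source arrival rate multiplied by the successive offloading fractions of the two hops. The sketch below makes this composition explicit; the numeric values are arbitrary assumptions for illustration.

```python
def second_hop_rate(arrival_rate, first_hop_fraction, second_hop_fraction):
    """Request rate traversing both hops, e.g. lambda_{j,i} * beta_{j,i} * beta'_j
    for traffic offloaded from fog f_{j,i} through edge e_j to the cloud C."""
    return arrival_rate * first_hop_fraction * second_hop_fraction

# Assumed example: 100 req/s arrive at the fog node, 40% are offloaded to
# its edge, and 50% of that is forwarded on to the cloud.
rate_to_cloud = second_hop_rate(100.0, 0.4, 0.5)  # -> 20.0 req/s
```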
A.2: Computation latency
The computation latency of two-hop offloading is also based on the M/M/c queuing model, and \(l_{j,i}\), \(l^\prime _{j}\), and \(l''\) are computed in the same way as in (9), (10), and (11), respectively. However, since we are considering the second hop, the calculations of \(R_{j,i}\), \(R^\prime _{j}\), and \(R''\) differ from those given in (6), (7), and (8). \(R_{j,i}\) in two-hop offloading can be represented as
where \(\lambda _{j,i^\prime }\beta _{j,i^\prime }\beta ^\prime _{j,i}\) is the request rate offloaded from \(f_{j, i^\prime }\) to \(f_{j,i}\), \(\lambda ^\prime _{j^\prime }\beta ^\star _{j^\prime ,j}\beta ^\prime _{j,i}\) is the request rate offloaded from \(e_{j^\prime }\) to \(f_{j,i}\), and \(\lambda ''\beta ''_{j} \beta ^\prime _{j,i}\) is the request rate offloaded from C to \(f_{j,i}\). \(R^\prime _{j}\) in two-hop offloading can be represented as
where \(\lambda _{j^\prime ,i}\beta _{j^\prime ,i}\beta ^\star _{j^\prime ,j}\) is the request rate offloaded from \(f_{j^\prime ,i}\) to \(e_j\), and \(\lambda ''\left( 1 - \sum \nolimits _{i = 1}^{n_j}\beta ^\prime _{j,i}\right) \beta ''_j\), \(\lambda _{j,i}\left(1 - \beta ^\prime _j - \sum \nolimits _{i^\prime = 1, i^\prime \ne i}^{n_j}\beta ^\prime _{j,i^\prime } - \sum \nolimits _{j^\prime = 1, j^\prime \ne j}^{m}\beta ^\star _{j,j^\prime }\right)\beta _{j,i}\), and \(\lambda ^\prime _{j^\prime }\left( 1 - \sum \nolimits _{i = 1}^{n_j}\beta ^\prime _{j,i}\right) \beta ^\star _{j^\prime ,j}\) are the request rates received by \(e_j\) from C, \(f_{j,i}\), and \(e_{j^\prime }\), respectively.
\(R''\) in two-hop offloading can be represented as
where \(\lambda _{j,i}\beta _{j,i}\beta ^\prime _{j}\) is the request rate offloaded from \(f_{j,i}\) to C.
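Since the computation latency is based on the M/M/c queuing model, it can be evaluated with the standard Erlang C formula once the aggregate request rate \(R\) at a node is known. The sketch below implements the textbook M/M/c mean response time, not the paper's exact expressions; the arrival and service rates in the example are assumed values.

```python
import math

def erlang_c(c, offered_load):
    """Erlang C: probability an arriving request must wait in an M/M/c queue.
    offered_load = lambda / mu in Erlangs; stability requires offered_load < c."""
    if offered_load >= c:
        raise ValueError("unstable queue: offered load must be below c")
    summation = sum(offered_load ** k / math.factorial(k) for k in range(c))
    top = offered_load ** c / (math.factorial(c) * (1 - offered_load / c))
    return top / (summation + top)

def mm_c_response_time(lam, mu, c):
    """Mean response time (queueing wait + service) of an M/M/c queue."""
    wq = erlang_c(c, lam / mu) / (c * mu - lam)  # mean waiting time in queue
    return wq + 1.0 / mu

# Assumed example: 8 req/s arriving at a node with 2 servers of 5 req/s each.
t = mm_c_response_time(lam=8.0, mu=5.0, c=2)
```

For c = 1 this reduces to the M/M/1 case, where the waiting probability equals the utilization \(\rho\).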
A.3: Communication cost
Here, we calculate the communication cost between tiers. Since two-hop offloading involves an extra second hop, its estimation differs slightly from that of one-hop offloading. \(S^\prime _{C,E}\) in two-hop offloading can be represented as
where \(\lambda _{j,i}\beta _{j,i}\beta ^\prime _{j}\) is the second-hop offloading from \(f_{j,i}\) to C. \(S^\prime _{E,E^\prime }\) in two-hop offloading can be represented as
where \(\lambda _{j,i}\beta _{j,i}\beta ^\star _{j,j^\prime }\) is the second-hop offloading from \(f_{j,i}\) to \(e_{j^\prime }\). \(S_{E,F}\) in two-hop offloading can be represented as
where \(\lambda _{j,i}\beta _{j,i}\beta ^\prime _{j,i^\prime }\) is the second-hop offloading from \(f_{j,i}\) to \(f_{j,i^\prime }\), \(\lambda ^\prime _{j}\beta ^\star _{j,j^\prime }\beta ^\prime _{j^\prime ,i}\) is the second-hop offloading from \(e_j\) to \(f_{j^\prime ,i}\), and \(\lambda ''\beta ''_{j}\beta ^\prime _{j,i}\) is the second-hop offloading from C to \(f_{j,i}\).
A.4: Computation cost
The evaluation of the computation cost in two-hop offloading is the same as in one-hop offloading, as shown in (15) for the cloud tier, (16) for the edge tier, and (17) for the fog tier.
A.5: Objective and constraints
The objective function in two-hop offloading is the same as in one-hop offloading, as shown in (18). The constraints of two-hop offloading are more complicated than those of one-hop offloading because of the second hop; the objective function must satisfy (22)–(27), along with the following constraints.
The constraints in (A12), (A13), and (A14) ensure that the total communication latency plus the total computation latency of the cloud, edge, and fog tiers in the case of two-hop offloading does not exceed the maximum latency limit.
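These latency constraints amount to checking, for each offloading path, that the summed per-hop communication latencies plus the computation latency at the executing tier stay within the deadline. A minimal feasibility check with assumed per-hop values might look like:

```python
def path_within_deadline(comm_latencies, comp_latency, max_latency):
    """True if total communication latency over all hops plus the
    computation latency at the executing tier meets the deadline."""
    return sum(comm_latencies) + comp_latency <= max_latency

# Two-hop path fog -> edge -> cloud with assumed latencies (seconds):
# 10 ms first hop, 30 ms second hop, 50 ms computation, 100 ms deadline.
ok = path_within_deadline(comm_latencies=[0.010, 0.030],
                          comp_latency=0.050,
                          max_latency=0.100)  # -> True
```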
About this article
Cite this article
Lin, BS., Kar, B., Chin, TL. et al. Cost optimization of cloud-edge-fog federated systems with bidirectional offloading: one-hop versus two-hop. Telecommun Syst 84, 487–505 (2023). https://doi.org/10.1007/s11235-023-01061-x