SherlockFog: a new tool to support application analysis in Fog and Edge computing

  • Maximiliano Geier
  • David González Márquez
  • Esteban Mocskos


The Fog and Edge Computing paradigms have emerged as a solution to the limitations of the Cloud Computing model in serving a huge number of connected devices efficiently. These devices have unused computing power that could be exploited to execute parallel applications. A large number of existing and new parallel applications are programmed using the Message Passing Interface (MPI), the de facto standard in High Performance Computing environments. We focus on the following question: can MPI-based applications take advantage of the increasing number of distributed resources available through the Fog/Edge Computing paradigm? In this work we present an extension to SherlockFog, a tool for experimenting with parallel applications in Fog and Edge Computing environments, to explore the impact of heterogeneity in computing power. This new version of our tool uses the Intel Pin instrumentation framework to inject instructions parametrically into the target code, mimicking CPUs of different computing power. A validation is presented using the MPI versions of the MG and CG NAS Parallel Benchmarks to evaluate this estimation when SherlockFog is used to emulate Fog/Edge scenarios. We analyze the impact of slower nodes on the two benchmarks and show that the incidence of a single slower node is significant, but that slowing down more nodes does not degrade performance further. The effect of latency is also analyzed; its impact depends on the communication pattern of the target code. We show that SherlockFog provides a framework to analyze the behavior of MPI libraries and applications towards achieving a Fog/Edge-ready distributed computing environment.
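The finding that a single slower node dominates can be illustrated with a toy model. In a bulk-synchronous MPI computation such as MG or CG, each iteration ends with a collective synchronization, so the iteration time is governed by the slowest rank. The sketch below (hypothetical names, not part of SherlockFog) assumes instruction injection multiplies the compute time of the affected ranks by a constant factor:

```python
# Toy model of a bulk-synchronous MPI application: in every iteration each
# rank computes, then all ranks synchronize (e.g. via a collective), so the
# iteration time equals the maximum per-rank compute time.

def iteration_time(compute_times):
    """Time of one synchronized iteration: the slowest rank dominates."""
    return max(compute_times)

def emulated_runtime(n_ranks, n_slow, slowdown, base=1.0, iterations=100):
    """Runtime when n_slow of n_ranks are slowed down by `slowdown`x.

    Mimicking a slower CPU by injecting extra instructions into the target
    code multiplies the compute time of the affected ranks.
    """
    times = [base * slowdown] * n_slow + [base] * (n_ranks - n_slow)
    return iterations * iteration_time(times)

baseline  = emulated_runtime(16, 0, 2.0)  # all ranks at full speed
one_slow  = emulated_runtime(16, 1, 2.0)  # a single 2x-slower rank
many_slow = emulated_runtime(16, 8, 2.0)  # half the ranks 2x slower

# One slow rank already doubles the runtime; slowing more ranks down
# changes nothing, because every barrier waits for the slowest one.
assert one_slow == 2 * baseline
assert many_slow == one_slow
```

This model deliberately ignores communication cost, which is why it cannot capture the latency effect: that depends on the communication pattern of the target code, as the experiments show.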


Distributed systems · Fog and Edge computing · IoT · Parallel applications · Benchmarks



The authors would like to thank the Centro de Simulación Computacional para Aplicaciones Científicas/CSC-CONICET and the Centro de Cómputos de Alto Rendimiento (CeCAR) for providing the equipment we have used in the experimental setups throughout this work. This work is supported by Universidad de Buenos Aires (UBACyT 20020130200096BA), Consejo Nacional de Investigaciones Científicas y Técnicas (PIO13320150100020CO), and Agencia Nacional de Promoción de Ciencia y Técnica (PICT-2015-2761 and PICT-2015-0370).



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación, Buenos Aires, Argentina
  2. CONICET, Centro de Simulación Computacional p/Aplic. Tecnológicas (CSC), Buenos Aires, Argentina
