Autonomic and Latency-Aware Degree of Parallelism Management in SPar

  • Adriano Vogel
  • Dalvan Griebler
  • Daniele De Sensi
  • Marco Danelutto
  • Luiz Gustavo Fernandes
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11339)


Stream processing applications have become a representative workload in current computing systems, and a significant share of them demands parallelism to increase performance. However, programmers often face a trade-off between coding productivity and performance when introducing parallelism. SPar was created to balance this trade-off for application programmers by using the C++11 attribute annotation mechanism. In SPar, as in other programming frameworks for stream processing, manually defining the number of replicas to be used for the stream operators is a challenge. Moreover, several stream processing applications require low latency, yet explicit latency requirements are poorly supported in state-of-the-art parallel programming frameworks. Since there is a direct relationship between the number of replicas and the latency of the application, in this work we propose an autonomic and adaptive strategy that chooses a proper number of replicas in SPar to address latency constraints. We experimentally evaluated the implemented strategy on a real-world application, showing that our adaptive strategy provides a higher level of abstraction while automatically managing latency.
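To make the idea concrete, the adaptation described above can be sketched as a small feedback step that compares the observed latency against the user's latency constraint and adjusts the replica count accordingly. This is a minimal illustration only, not the paper's actual algorithm or SPar's API: the function name, thresholds, and the 0.5 scale-down band are assumptions made for the sketch.

```cpp
#include <algorithm>

// Hypothetical single adaptation step for a latency-aware strategy:
// scale up when the observed per-item latency violates the constraint,
// scale down when there is ample slack, otherwise keep the current
// number of replicas. All names and thresholds are illustrative.
int adapt_replicas(int current, double observed_latency_ms,
                   double target_latency_ms, int max_replicas) {
    if (observed_latency_ms > target_latency_ms)
        return std::min(current + 1, max_replicas);  // behind target: add a replica
    if (observed_latency_ms < 0.5 * target_latency_ms)
        return std::max(current - 1, 1);             // well below target: release one
    return current;                                  // within band: no change
}
```

A runtime would invoke such a step periodically with fresh latency measurements, so the replica count converges toward the smallest configuration that still satisfies the constraint.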


Keywords: Autonomic computing · Stream processing · Parallel programming · Adaptive degree of parallelism



This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, by the EU H2020-ICT-2014-1 project RePhrase (No. 644235), and by the FAPERGS 01/2017-ARD project ParaElastic (No. 17/2551-0000871-5).



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Adriano Vogel (1)
  • Dalvan Griebler (1, 3)
  • Daniele De Sensi (2)
  • Marco Danelutto (2)
  • Luiz Gustavo Fernandes (1)

  1. School of Technology, Pontifical Catholic University of Rio Grande do Sul, Porto Alegre, Brazil
  2. Department of Computer Science, University of Pisa, Pisa, Italy
  3. Laboratory of Advanced Research on Cloud Computing, Três de Maio Faculty, Três de Maio, Brazil
