Simplifying and implementing service level objectives for stream parallelism

  • Dalvan Griebler
  • Adriano Vogel
  • Daniele De Sensi
  • Marco Danelutto
  • Luiz G. Fernandes


Increasing attention has been given to providing service level objectives (SLOs) in stream processing applications, due to performance and energy requirements and the need to limit resource usage while improving system utilization. Since current and next-generation computing systems intrinsically offer parallel architectures, software must naturally exploit the architecture's parallelism. Implementing and meeting SLOs in existing applications is not a trivial task for application programmers, since the software development process requires, in addition to parallelism exploitation, the implementation of autonomic algorithms or strategies. This is a system-oriented programming approach and requires the management of multiple knobs and sensors (e.g., the number of threads to use, the clock frequency of the cores) so that the system can self-adapt at runtime. In this work, we introduce a new and simpler way to define SLOs in the application's source code, abstracting from the programmer all the details of the self-adaptive system implementation. The application programmer specifies which parts of the code to parallelize and the related SLOs that should be enforced. To reach this goal, source-to-source code transformation rules are implemented in our compiler, which automatically generates self-adaptive strategies to enforce, at runtime, the user-expressed objectives. The experiments highlighted promising results, with simpler, effective, and efficient SLO implementations for real-world applications.


Parallel programming · Stream processing · Self-adaptive · Domain-specific language · Power-aware computing



This study was partially funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-Brasil (CAPES)-Finance Code 001 and by the FAPERGS 01/2017-ARD Project ParaElastic (No. 17/2551-0000871-5). We would like to thank Laboratório de Alto Desempenho (LAD) from PUCRS for partially providing computing resources.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Technology, Pontifical Catholic University of Rio Grande do Sul (PUCRS), Porto Alegre, Brazil
  2. Department of Computer Science, University of Pisa (UNIPI), Pisa, Italy
  3. Laboratory of Advanced Research on Cloud Computing (LARCC), Três de Maio Faculty (SETREM), Três de Maio, Brazil