Enhanced Energy Efficiency with the Actor Model on Heterogeneous Architectures

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9687)


Due to rising energy costs, energy-efficient data centers have attracted increasing attention in research and practice. Optimizations targeting energy efficiency are usually performed in isolation, either by producing more efficient hardware, by reducing the number of nodes simultaneously active in a data center, or by applying dynamic voltage and frequency scaling (DVFS). Energy consumption is, however, highly application dependent. We therefore argue that, for best energy efficiency, it is necessary to combine different measures at both the programming and the runtime level. As there is a tradeoff between execution time and power consumption, we vary both independently to gain insight into how they affect total energy consumption. We choose frequency scaling to lower power consumption and heterogeneous processing units to reduce execution time. While these options have already been shown to be effective in the literature, the lack of energy-efficient software in practice suggests missing incentives for energy-efficient programming. In fact, programming heterogeneous applications is a challenging task, owing to the differing memory models of the underlying processors and the need to use different programming languages for the same tasks. We propose the actor model as a basis for efficient and simple programming, and extend it to run seamlessly on either a CPU or a GPU. In a second step, we automatically balance the load between the available processing units. With heterogeneous actors we are able to save 40–80% of energy compared to CPU-only applications, while additionally increasing programmability.
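To illustrate the idea of an actor whose behavior can be dispatched to either a CPU or a GPU, the following is a minimal sketch in Scala. All names (`HeterogeneousActor`, `Backend`, `gpuKernelStub`, the size-based placement heuristic) are illustrative assumptions, not the paper's actual API; the GPU branch is stubbed with a plain host function standing in for a kernel launch.

```scala
import java.util.concurrent.LinkedBlockingQueue

// Which processing unit a message should be handled on.
sealed trait Backend
case object CPU extends Backend
case object GPU extends Backend

// A unit of work: square every element and deliver the result via a callback.
final case class Work(data: Array[Int], replyTo: Array[Int] => Unit)

// A toy actor: a mailbox drained by one worker thread. A placement function
// decides, per message, whether the behavior runs on the CPU or is offloaded.
class HeterogeneousActor(choose: Work => Backend) {
  private val mailbox = new LinkedBlockingQueue[Work]()

  private val worker = new Thread(() => {
    while (true) {
      val w = mailbox.take()
      choose(w) match {
        case CPU => w.replyTo(w.data.map(x => x * x)) // run on the host
        case GPU => w.replyTo(gpuKernelStub(w.data))  // offload (stubbed)
      }
    }
  })
  worker.setDaemon(true)
  worker.start()

  // Asynchronous send, in the usual actor notation.
  def !(w: Work): Unit = mailbox.put(w)

  // Stand-in for a real GPU kernel; semantically identical to the CPU path,
  // which is exactly the property that makes transparent placement possible.
  private def gpuKernelStub(data: Array[Int]): Array[Int] = data.map(x => x * x)
}
```

A simple load-balancing policy in this sketch could route small batches to the CPU and large, data-parallel ones to the GPU, e.g. `new HeterogeneousActor(w => if (w.data.length < 1000) CPU else GPU)`; the paper's runtime instead balances load between the units automatically.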





Copyright information

© IFIP International Federation for Information Processing 2016

Authors and Affiliations

  1. University of Neuchâtel, Neuchâtel, Switzerland
