Comparing Machine Learning Approaches for Context-Aware Composition

  • Antonina Danylenko
  • Christoph Kessler
  • Welf Löwe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6708)


Context-Aware Composition allows optimal variants of algorithms, data structures, and schedules to be selected automatically at runtime using generalized dynamic Dispatch Tables. These tables grow exponentially with the number of significant context attributes. To make Context-Aware Composition scale, we suggest four alternatives to Dispatch Tables, all well known in the field of machine learning: Decision Tree, Decision Diagram, Naive Bayes, and Support Vector Machine classifiers. We assess their decision overhead and memory consumption both theoretically and practically, in a number of experiments on different hardware platforms. Decision Diagrams turn out to be more compact than Dispatch Tables, almost as accurate, and faster in decision making. Using Decision Diagrams in Context-Aware Composition thus improves scalability: Context-Aware Composition can be applied at more program points and can regard more context attributes than before.
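The trade-off between the two representations can be sketched as follows. The snippet below is a hypothetical Python illustration, not the authors' implementation: a Dispatch Table stores one entry per combination of discretized context-attribute values (and so grows exponentially with the number of attributes), while a Decision-Diagram-like structure collapses all attribute combinations that select the same variant. The attribute names (`size_class`, `cores`) and variant functions are invented for the example.

```python
import itertools

# Stand-ins for two algorithm variants of the same component interface.
def insertion_sort(xs):
    return sorted(xs)

def merge_sort(xs):
    return sorted(xs)

# Discretized context attributes (hypothetical): problem-size class and core count.
SIZES = ["small", "large"]
CORES = [1, 2, 4, 8]

# Dispatch Table: one entry per attribute combination, 2 * 4 = 8 entries here.
# With k attributes of d values each, this grows as d**k.
dispatch_table = {
    (size, cores): (insertion_sort if size == "small" else merge_sort)
    for size, cores in itertools.product(SIZES, CORES)
}

# Decision-Diagram-like representation: the core count never changes the
# decision in this example, so all its branches merge into shared leaves,
# leaving only 2 entries instead of 8.
decision_diagram = {"small": insertion_sort, "large": merge_sort}

def select_variant_table(size_class, cores):
    # Table lookup: constant time, but exponential memory in the attribute count.
    return dispatch_table[(size_class, cores)]

def select_variant_diagram(size_class, cores):
    # Diagram traversal: skips attributes that do not affect the decision.
    return decision_diagram[size_class]
```

Both selectors return the same variant for every context; the diagram simply avoids materializing the redundant entries.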


Keywords: Context-Aware Composition, Autotuning, Machine Learning





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Antonina Danylenko (1)
  • Christoph Kessler (2)
  • Welf Löwe (1)
  1. Software Technology Group, Linnaeus University, Växjö, Sweden
  2. Department for Computer and Information Science, Linköping University, Linköping, Sweden
