Introduction to Dynamic Code Generation: An Experiment with Matrix Multiplication for the STHORM Platform

  • Damien Couroussé
  • Victor Lomüller
  • Henri-Pierre Charles


Since the beginning of computer history, programming languages have served as an intermediate representation between algorithm descriptions and machine-readable instructions. In broad outline, running an algorithm on a computer requires the following steps: (1—software development, implementation) the developer transcribes the algorithm into a source file written in a programming language; (2—compilation) a compiler translates these programming-language instructions into machine code and adapts the code for an optimized fit to the target execution platform; (3—execution) the processor loads the input data, executes the machine instructions, and produces the results.
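Dynamic code generation, the subject of this paper, defers part of step (2) to run time, when input characteristics are known. As an illustrative sketch only (in Python rather than the C-based STHORM toolchain; all names below are hypothetical), a matrix-multiplication kernel can be specialized at run time for a matrix size discovered during execution:

```python
# Illustrative sketch: run-time specialization of a matrix-multiply
# kernel for an n x n size known only at execution time.
# Python's compile()/exec() stand in for a real dynamic code generator.

def generate_matmul(n):
    """Generate and compile a kernel specialized for n x n matrices."""
    # Hard-code the size n into the generated source, so every loop
    # is fully unrolled and all indices are constants.
    src = "def matmul(a, b, c):\n"
    for i in range(n):
        for j in range(n):
            terms = " + ".join(f"a[{i}][{k}] * b[{k}][{j}]"
                               for k in range(n))
            src += f"    c[{i}][{j}] = {terms}\n"
    namespace = {}
    # Step (2), compilation, performed at run time:
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["matmul"]

# Step (3), execution: specialize for 2x2 matrices, then run the kernel.
matmul2 = generate_matmul(2)
a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
c = [[0, 0], [0, 0]]
matmul2(a, b, c)
print(c)  # [[19, 22], [43, 50]]
```

The cost of generating the kernel is paid once per matrix size, then amortized over every subsequent multiplication with that size, which is the basic trade-off any dynamic code generator must win.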


Keywords: Virtual Machine · Code Generation · Binary Code · Processing Kernel · Machine Code



Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Damien Couroussé (1)
  • Victor Lomüller (1)
  • Henri-Pierre Charles (1)
  1. CEA, LIST, DACLE/LIALP, Grenoble, France
