Affine Parallelization of Loops with Run-Time Dependent Bounds from Binaries

  • Aparna Kotha
  • Kapil Anand
  • Timothy Creech
  • Khaled ElWazeer
  • Matthew Smithson
  • Rajeev Barua
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8410)


An automatic parallelizer is a tool that converts serial code to parallel code. Such a tool is important because most hardware today is parallel, and manually rewriting the vast repository of serial code is tedious and error-prone. We build an automatic parallelizer for binary code, i.e., a tool that converts a serial binary to a parallel binary. Working on binaries is important because: (i) most serial legacy code has no source code available; and (ii) a binary-level tool is compatible with all compilers and languages.

In the past, binary automatic parallelization techniques have been developed, and researchers have presented results on small kernels from PolyBench. These techniques are a good start; however, they fall far short of parallelizing larger codes from the SPEC2006 and OMP2001 benchmark suites, which are representative of real-world codes. The main limitation of past techniques is the assumption that loop bounds are statically known when calculating loop dependencies. In larger codes, however, loop bounds are known only at run time; hence loop dependencies calculated statically are overly conservative, making binary parallelization ineffective.
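To illustrate the limitation with an example of our own (not taken from the paper): in a loop whose array accesses appear in linearized form, whether iterations are independent can hinge on a bound that only exists at run time, so a purely static analysis must assume the worst.

```c
#include <stddef.h>

/* Hypothetical loop nest as a binary-level tool would see it: the 2-D
 * array is only visible through the flat expression a[i * n + j].
 * Iterations of the outer i-loop touch disjoint memory exactly when
 * m <= n, i.e. when the inner trip count m stays within one "row" of
 * width n. With n and m unknown statically, a conservative dependence
 * analysis must assume the rows overlap and reject parallelization,
 * even though at run time they usually do not. */
void scale_rows(double *a, size_t n, size_t m, double c) {
    for (size_t i = 0; i < n; i++)       /* bound n known only at run time */
        for (size_t j = 0; j < m; j++)   /* iterations disjoint iff m <= n */
            a[i * n + j] *= c;
}
```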

In this paper we present a novel algorithm that significantly enhances past techniques by guessing the most likely loop bounds using only the memory expressions present in the loop. It then inserts run-time checks to verify whether these guesses were correct: if they hold, the parallel version of the loop executes; otherwise, the serial version executes. We apply these techniques to the large affine benchmarks in SPEC2006 and OMP2001 and, unlike previous methods, obtain speedups from the binary that are as good as those from source. We also present results on the number of loops parallelized directly from a binary with and without this algorithm. On the 8 affine benchmarks in these suites, the best existing binary parallelization method achieves an average speedup of 1.74X, whereas our method achieves a speedup of 3.38X. This is close to the speedup of 3.15X obtained from source code.
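The guess-and-check strategy described above amounts to loop versioning. The sketch below is our own minimal illustration of that pattern, not the paper's implementation: a guessed independence condition on a run-time value guards a parallel copy of the loop, with the original serial loop as the fallback.

```c
#include <stddef.h>

/* Loop-versioning sketch (illustrative; names and the specific check are
 * ours). The loop writes a[i] and reads a[i + k], where k is known only
 * at run time. The write range is [0, len) and the read range is
 * [k, len + k), so iterations are independent exactly when k >= len.
 * A rewriter can emit both versions and dispatch on that cheap check. */
void shift_add(double *a, size_t len, size_t k, double c) {
    if (k >= len) {                      /* guess verified: regions disjoint */
        #pragma omp parallel for         /* parallel version of the loop */
        for (long i = 0; i < (long)len; i++)
            a[i] = a[i + k] + c;
    } else {                             /* guess failed: serial fallback */
        for (size_t i = 0; i < len; i++)
            a[i] = a[i + k] + c;
    }
}
```

The run-time check costs one comparison per loop entry, which is negligible against the loop body, and it never sacrifices correctness: when the guess is wrong, the original serial code runs unchanged.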


Keywords: Automatic Parallelization · Binary Rewriting · Affine Loop Parallelization · Run-Time Dependent Loop Bounds



Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

Aparna Kotha, Kapil Anand, Timothy Creech, Khaled ElWazeer, Matthew Smithson, and Rajeev Barua: University of Maryland, College Park, USA
