ompVerify: Polyhedral Analysis for the OpenMP Programmer

  • V. Basupalli
  • T. Yuki
  • S. Rajopadhye
  • A. Morvan
  • S. Derrien
  • P. Quinton
  • D. Wonnacott
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6665)


We describe a static analysis tool for OpenMP programs, integrated into the standard open-source Eclipse IDE. It detects an important class of common data-race errors in OpenMP parallel loop programs by flagging incorrectly specified omp parallel for directives. The analysis is based on the polyhedral model and covers a class of program fragments called Affine Control Loops (ACLs, also known as Static Control Parts, SCoPs). ompVerify automatically extracts such ACLs from an input C program and reports errors to the user as specific, precise messages. We illustrate the power of our techniques through a number of simple but non-trivial examples with subtle parallelization errors that are difficult to detect, even for expert OpenMP programmers.


Keywords: Parallel Programming, Iteration Space, Statement Instance, Data Race, Parallel Loop





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • V. Basupalli (1)
  • T. Yuki (1)
  • S. Rajopadhye (1)
  • A. Morvan (2)
  • S. Derrien (2)
  • P. Quinton (2)
  • D. Wonnacott (3)
  1. Computer Science Department, Colorado State University, USA
  2. CAIRN, IRISA, Rennes, France
  3. Computer Science Department, Haverford College, USA
