An Approach for Semiautomatic Locality Optimizations Using OpenMP

  • Jens Breitbart
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7134)


The processing power of multicore CPUs continues to grow rapidly, whereas memory bandwidth lags behind. Almost all modern processors use multiple cache levels to overcome the penalty of slow main memory; however, cache efficiency is directly tied to data locality. This paper studies a possible way to incorporate the exposure of data locality into the syntax of the parallel programming system OpenMP. We study data locality optimizations on two applications: matrix multiplication and the Gauß-Seidel stencil. We show that only small changes to OpenMP are required to expose data locality so that a compiler can transform the code. Our notion of tiled loops allows developers to easily describe data locality even in scenarios with non-trivial data dependencies. Furthermore, we describe two optimization techniques. One explicitly uses a form of local memory to prevent conflict cache misses, whereas the second modifies the wavefront parallel programming pattern with dynamically sized blocks to increase the number of parallel tasks. As an additional contribution, we explore the benefit of using multiple levels of tiling.


Keywords: Memory Bandwidth, Tile Size, Stencil Computation, High Performance Fortran, Loop Tiling





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Jens Breitbart — Research Group Programming Languages / Methodologies, Universität Kassel, Kassel, Germany
