DASH: Data Structures and Algorithms with Support for Hierarchical Locality

  • Karl Fürlinger
  • Colin Glass
  • Jose Gracia
  • Andreas Knüpfer
  • Jie Tao
  • Denis Hünich
  • Kamran Idrees
  • Matthias Maiterth
  • Yousri Mhedheb
  • Huan Zhou
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8806)


DASH is a realization of the PGAS (partitioned global address space) model in the form of a C++ template library. Operator overloading is used to provide global-view PGAS semantics without the need for a custom PGAS (pre-)compiler. The DASH library is implemented on top of our runtime system DART, which provides an abstraction layer on top of existing one-sided communication substrates. DART contains methods to allocate memory in the global address space as well as collective and one-sided communication primitives. To support the development of applications that exploit a hierarchical organization, either on the algorithmic or on the hardware level, DASH features the notion of teams that are arranged in a hierarchy. Based on a team hierarchy, the DASH data structures support locality iterators as a generalization of the conventional local/global distinction found in many PGAS approaches.


Keywords: High Performance Computing · Template Library · Distributed Memory Machine · Data Container · Global Address Space





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Karl Fürlinger (1)
  • Colin Glass (2)
  • Jose Gracia (2)
  • Andreas Knüpfer (4)
  • Jie Tao (3)
  • Denis Hünich (4)
  • Kamran Idrees (2)
  • Matthias Maiterth (1)
  • Yousri Mhedheb (3)
  • Huan Zhou (2)
  1. Computer Science Department, MNM Team, Ludwig-Maximilians-Universität (LMU) Munich, Munich, Germany
  2. High Performance Computing Center Stuttgart, University of Stuttgart, Germany
  3. Steinbuch Center for Computing, Karlsruhe Institute of Technology, Germany
  4. Center for Information Services and High Performance Computing (ZIH), TU Dresden, Germany
