Tying Memory Management to Parallel Programming Models

  • Ioannis E. Venetis
  • Theodore S. Papatheodorou
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4128)

Abstract

Stand-alone threading libraries lack sophisticated memory management techniques. In this paper, we present a methodology that allows threading libraries implementing non-preemptive parallel programming models to reduce their memory requirements, based on the properties of those models. We applied the methodology to NthLib, an implementation of the Nano-Threads programming model, and evaluated it on an Intel-based multiprocessor system with HyperThreading and on the SMTSIM simulator. Our results indicate that not only do memory requirements drop drastically, but execution time also improves compared to the original implementation. This allows more fine-grained, as well as larger numbers of, parallel tasks to be created.

Keywords

Execution Time · Memory Requirement · Parallel Task · Memory Region · Parallel Programming Model
These keywords were added by machine and not by the authors.

References

  1. Tullsen, D., Eggers, S., Levy, H.: Simultaneous Multithreading: Maximizing On-Chip Parallelism. In: Proceedings of the 22nd Annual International Symposium on Computer Architecture, S. Margherita Ligure, Italy, pp. 392–403 (1995)
  2. Marr, D.T., Binns, F., Hill, D.L., Hinton, G., Koufaty, D.A., Miller, J.A., Upton, M.: Hyper-Threading Technology Architecture and Microarchitecture. Intel Technology Journal 6(1), 4–15 (2002)
  3. Mohr, E., Kranz, D.A.: Lazy Task Creation: A Technique for Increasing the Granularity of Parallel Programs. IEEE Transactions on Parallel and Distributed Systems 2(3), 264–280 (1991)
  4. Goldstein, S.C., Schauser, K.E., Culler, D.E.: Lazy Threads: Implementing a Fast Parallel Call. Journal of Parallel and Distributed Computing 37(1), 5–20 (1996)
  5. von Behren, R., Condit, J., Zhou, F., Necula, G., Brewer, E.: Capriccio: Scalable Threads for Internet Services. In: Proceedings of the 19th Symposium on Operating System Principles, Bolton Landing, New York, pp. 268–281 (2003)
  6. Taura, K., Tabata, K., Yonezawa, A.: StackThreads/MP: Integrating Futures into Calling Standards. Technical Report TR 99-01, University of Tokyo (1999)
  7. del Cuvillo, J., Zhu, W., Hu, Z., Gao, G.R.: TiNy Threads: A Thread Virtual Machine for the Cyclops64 Cellular Architecture. In: Proceedings of the 5th Workshop on Massively Parallel Processing, Denver, Colorado (2005)
  8. Venetis, I.E., Papatheodorou, T.S.: A Time and Memory Efficient Implementation of the Nano-Threads Programming Model. Technical Report HPCLAB-TR-210106, High Performance Information Systems Laboratory (2006)
  9. Martorell, X., Labarta, J., Navarro, N., Ayguade, E.: A Library Implementation of the Nano-Threads Programming Model. In: Fraigniaud, P., Mignotte, A., Bougé, L., Robert, Y. (eds.) Euro-Par 1996. LNCS, vol. 1123, pp. 644–649. Springer, Heidelberg (1996)
  10. Polychronopoulos, C., Bitar, N., Kleiman, S.: Nanothreads: A User-Level Threads Architecture. Technical Report 1297, CSRD, University of Illinois at Urbana-Champaign (1993)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Ioannis E. Venetis (1)
  • Theodore S. Papatheodorou (1)
  1. High Performance Information Systems Laboratory, Department of Computer Engineering and Informatics, University of Patras, Rion, Greece