Runtime and Programming Support for Memory Adaptation in Scientific Applications via Local Disk and Remote Memory

Journal of Grid Computing

Abstract

The ever-increasing memory demands of many scientific applications and the complexity of today’s shared computational resources still require the occasional use of virtual memory, network memory, or even out-of-core implementations, with well-known drawbacks in performance and usability. In Mills et al. (Adapting to memory pressure from within scientific applications on multiprogrammed COWs. In: International Parallel and Distributed Processing Symposium, IPDPS, Santa Fe, NM, 2004), we introduced a basic framework for a runtime, user-level library, MMlib, in which DRAM is treated as a dynamic-size cache for large memory objects residing on local disk. Application developers can specify and access these objects through MMlib, enabling their application to execute optimally under variable memory availability, using as much DRAM as fluctuating memory levels allow. In this paper, we first extend our earlier MMlib prototype from a proof of concept to a usable, robust, and flexible library. We present a general framework that enables fully customizable memory malleability in a wide variety of scientific applications. We provide several necessary enhancements to the environment-sensing capabilities of MMlib, and introduce a remote memory capability based on MPI communication of cached memory blocks between ‘compute nodes’ and designated memory servers. The increasing speed of interconnection networks makes a remote memory approach attractive, especially at the large granularity present in large scientific applications. We show experimental results from three important scientific applications that require the general MMlib framework. The memory-adaptive versions perform nearly optimally under constant memory pressure and execute harmoniously with other applications competing for memory, without thrashing the memory system. Under constant memory pressure, we observe execution time improvements by factors of three to five over relying solely on the virtual memory system. With remote memory employed, these factors are even larger and significantly better than those of other, system-level remote memory implementations.
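
As a rough illustration of the programming model sketched above, the C fragment below registers a large memory object and accesses it one block at a time through a runtime of this kind. All mmlib_* names, signatures, and the malloc-backed stubs are hypothetical stand-ins for this sketch, not the published MMlib interface; a real implementation would keep only as many blocks resident in DRAM as current memory availability allows and evict the rest to local disk or, via MPI, to a designated memory server.

```c
/* Illustrative sketch only: the mmlib_* names, signatures, and the trivial
 * malloc-backed stubs are assumptions, not the published MMlib API. The
 * access pattern is the point: the application registers a large object and
 * touches it one block at a time through the library, giving the runtime a
 * control point at which it could cache, evict, or fetch blocks. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    size_t  nblocks;      /* number of fixed-size blocks in the object      */
    size_t  block_bytes;  /* size of each block in bytes                    */
    void  **blocks;       /* stub: one heap buffer per block; a real        */
                          /* runtime would cache/evict these under pressure */
} mmlib_object;

/* Register a large memory object with the (hypothetical) runtime. */
static mmlib_object *mmlib_register(size_t nblocks, size_t block_bytes)
{
    mmlib_object *obj = malloc(sizeof *obj);
    obj->nblocks      = nblocks;
    obj->block_bytes  = block_bytes;
    obj->blocks       = calloc(nblocks, sizeof *obj->blocks);
    return obj;
}

/* Fetch and pin block i. The stub just allocates lazily; a real runtime
 * would return a DRAM-resident copy, reading it back from local disk or a
 * remote memory server if it had been evicted. */
static void *mmlib_get_block(mmlib_object *obj, size_t i)
{
    if (!obj->blocks[i]) {
        obj->blocks[i] = malloc(obj->block_bytes);
        memset(obj->blocks[i], 0, obj->block_bytes);
    }
    return obj->blocks[i];
}

/* Unpin block i; 'dirty' tells the runtime whether the block must be
 * written back before eviction. The stub does nothing. */
static void mmlib_release_block(mmlib_object *obj, size_t i, int dirty)
{
    (void)obj; (void)i; (void)dirty;
}

static void mmlib_free(mmlib_object *obj)
{
    for (size_t i = 0; i < obj->nblocks; i++)
        free(obj->blocks[i]);
    free(obj->blocks);
    free(obj);
}

/* Example: sweep over a matrix stored as 256 panels of 4 MB each, i.e. an
 * object that may exceed the DRAM currently available to this process. */
int main(void)
{
    const size_t nblocks = 256, block_bytes = 4u << 20;
    mmlib_object *A = mmlib_register(nblocks, block_bytes);

    for (size_t i = 0; i < nblocks; i++) {
        double *panel = mmlib_get_block(A, i);
        panel[0] += 1.0;                        /* stand-in for real work */
        mmlib_release_block(A, i, /*dirty=*/1);
    }

    mmlib_free(A);
    return 0;
}
```

Compiled with any C99 compiler, the stub runs as ordinary in-core code; the value of the pattern is that routing every block access through the library gives the runtime the hook it needs to shrink or grow the application's DRAM footprint as memory pressure changes.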

References

  1. SciClone cluster project at the College of William and Mary, Webpage at http://compsci.wm.edu/SciClone/ (2005)

  2. Acharya, A., Edjlali, A., Saltz, J.: The utility of exploiting idle workstations for parallel computation. In: Proc. of the 1997 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS’97), Seattle, WA, pp. 225–234, June 1997

  3. Acharya, A., Setia, S.: Availability and utility of idle memory in workstation clusters. In: Proc. of the 1999 ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS’99), Atlanta, Georgia, pp. 35–46, May 1999

  4. Arpaci-Dusseau, A.: Implicit coscheduling: coordinated scheduling with implicit information in distributed systems. ACM Trans. Comput. Syst. 19(3), 283–331 (August, 2001)

  5. Barak, A., Braverman, A.: Memory ushering in a scalable computing cluster. J. Microprocess. Microsyst. 22(3,4), 175–182 (August, 1998)

  6. Barve, R.D., Vitter, J.S.: A theoretical framework for memory-adaptive algorithms. In: IEEE Symposium on Foundations of Computer Science, pp. 273–284 (1999)

  7. Batat, A., Feitelson, D.: Gang scheduling with memory considerations. In: Proc. of the 14th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2000), Cancun, Mexico, pp. 109–114, May 2000

  8. Brown, A.D., Mowry, T.C.: Taming the memory hogs: using compiler-inserted releases to manage physical memory intelligently. In: Proceedings of the 4th Symposium on Operating Systems Design and Implementation (OSDI-00), pp. 31–44 (2000)

  9. Chang, F., Itzkovitz, A., Karamcheti, V.: User-level resource constrained sandboxing. In: Proc. of the 4th USENIX Windows Systems Symposium, Seattle, WA, pp. 25–36, August 2000

  10. Chiang, S., Vernon, M.: Characteristics of a large shared memory production workload. In: Proc. of the 7th Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP 2001). Lecture Notes in Computer Science, vol. 2221, Cambridge, MA, pp. 159–187, June 2001

  11. Dachsel, H., Nieplocha, J., Harrison, R.: An out-of-core implementation of the COLUMBUS massively-parallel multireference configuration interaction program. In: Proceedings of Supercomputing ’98 (1998)

  12. Dail, H., Casanova, H., Berman, F.: A decoupled scheduling approach for the GrADS Program Development Environment. In: Proc. of the IEEE/ACM Supercomputing’02: High Performance Networking and Computing Conference (SC’02), Baltimore, MD, November 2002

  13. Feeley, M., Morgan, W., Pighin, F., Karlin, A., Levy, H., Thekkath, C.: Implementing global memory management in a workstation cluster. In: 15th ACM Symposium on Operating Systems Principles (SOSP-15), pp. 201–212 (1995)

  14. Feitelson, D., Rudolph, L.: Evaluation of design choices for gang scheduling using distributed hierarchical control. J. Parallel Distrib. Comput. 35(1), 18–34 (May, 1996)

  15. Flouris, M., Markatos, E.: Network RAM. In: High Performance Cluster Computing, pp. 383–408. Prentice Hall, Englewood Cliffs, NJ (1999)

  16. Frey, J., Tannenbaum, T., Livny, M., Foster, I., Tuecke, S.: Condor-G: a computation management agent for multi-institutional Grids. In: Proc. of the 10th IEEE International Symposium on High Performance Distributed Computing (HPDC-10), San Francisco, California, pp. 55–63, August 2001

  17. Gould, H., Tobochnik, J.: An introduction to computer simulation methods: applications to physical systems. Addison-Wesley, Reading, MA (1996)

  18. Henderson, R.: Job scheduling under the portable batch system. In: Proc. of the First Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP’95). Lecture Notes in Computer Science, vol. 949, Santa Barbara, CA, pp. 279–294, April 1995

  19. Iftode, L.: Home-Based Shared Virtual Memory. PhD thesis, Princeton University, June 1998

  20. Iftode, L., Petersen, K., Li, K.: Memory servers for multicomputers. In: Proc. of the IEEE 1993 Spring Conference on Computers and Communications (COMPCON’93), pp. 538–547, February 1993

  21. Koussih, S., Acharya, A., Setia, S.: Dodo: a user-level system for exploiting idle memory in workstation clusters. In: HPDC (1999)

  22. Lewis, M., Gerner, L.: Maui Scheduler, an advanced system software tool. In: Proc. of the ACM/IEEE Supercomputing’97: High Performance Networking and Computing Conference (SC’97), San Jose, CA, November 1997

  23. Markatos, E.P., Dramitinos, G.: Implementation of a reliable remote memory pager. In: USENIX Annual Technical Conference, pp. 177–190 (1996)

  24. Mills, R.T.: Dynamic adaptation to CPU and memory load in scientific applications. PhD thesis, Department of Computer Science, College of William and Mary, Fall (2004)

  25. Mills, R.T., Stathopoulos, A., Nikolopoulos, D.S.: Adapting to memory pressure from within scientific applications on multiprogrammed COWs. In: International Parallel and Distributed Processing Symposium (IPDPS 2004), Santa Fe, NM, USA (2004)

  26. Mills, R.T., Stathopoulos, A., Smirni, E.: Algorithmic modifications to the Jacobi–Davidson parallel eigensolver to dynamically balance external CPU and memory load. In: 2001 International Conference on Supercomputing, pp. 454–463. ACM Press, New York (2001)

  27. Narravula, S., Jin, H., Vaidyanathan, K., Panda, D.: Designing efficient cooperative caching schemes for multi-tier data centers over RDMA-enabled networks. In: Proc. of the 6th IEEE/ACM International Symposium on Cluster Computing and the Grid, Singapore, pp. 401–408, May 2006

  28. Nieplocha, J., Krishnan, M., Palmer, B., Tipparaju, V., Zhang, Y.: Exploiting processor groups to extend scalability of the GA shared memory programming model. In: ACM Computing Frontiers, Italy (2005)

  29. Nikolopoulos, D.: Malleable memory mapping: user-level control of memory bounds for effective program adaptation. In: Proc. of the 17th IEEE/ACM International Parallel and Distributed Processing Symposium (IPDPS 2003), Nice, France, April 2003

  30. Nikolopoulos, D., Polychronopoulos, C.: Adaptive scheduling under memory pressure on multiprogrammed clusters. In: Proc. of the 2nd IEEE/ACM International Conference on Cluster Computing and the Grid (ccGrid’02), Berlin, Germany, pp. 22–29, May 2002

  31. Oleszkiewicz, J., Xiao, L., Liu, Y.: Parallel network RAM: effectively utilizing global cluster memory for large data-intensive parallel programs. In: 2004 International Conference on Parallel Processing (ICPP’2004), pp. 353–360 (2004)

  32. Pang, H., Carey, M.J., Livny, M.: Memory-adaptive external sorting. In: Agrawal R., Baker S., Bell D. A. (eds.) 19th International Conference on Very Large Data Bases, August 24-27, 1993, Dublin, Ireland, Proceedings, pp. 618–629. Morgan Kaufmann, San Mateo, CA (1993)

  33. Petrini, F., Feng, W.: Time-sharing parallel jobs in the presence of multiple resource requirements. In: Proc. of the 6th Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP 2000), in conjunction with IEEE IPDPS’2000. Lecture Notes in Computer Science, vol. 1911, Cancun, Mexico, pp. 113–136, May 2000

  34. Plank, J., Li, K., Puening, M.: Diskless checkpointing. IEEE Trans. Parallel Distrib. Syst. 9(10), 972–986 (October, 1998)

  35. Daugherty, R., Ferber, D.: Network queuing environment. In: Proceedings of the Spring Cray Users Group Conference (CUG’94), San Diego, CA, pp. 203–205, March 1994

  36. Saad, Y.: SPARSKIT: a basic toolkit for sparse matrix computations. Technical Report 90-20, Research Institute for Advanced Computer Science, NASA Ames Research Center, Moffett Field, CA, 1990. Software currently available at ftp://ftp.cs.umn.edu/dept/sparse/.

  37. Sobalvarro, P., Pakin, S., Weihl, W., Chien, A.: Dynamic coscheduling on workstation clusters. In: Proc. of the 4th Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP’98). Lecture Notes in Computer Science, vol. 1459, pp. 231–256, Orlando, FL, April 1998

  38. Stathopoulos, A., Öğüt, S., Saad, Y., Chelikowsky, J.R., Kim, H.: Parallel methods and tools for predicting material properties. Comput. Sci. Eng. 2(4), 19–32 (2000)

  39. Vadhiyar, S., Dongarra, J.: A Performance Oriented Migration Framework for the Grid. Technical Report, Innovative Computing Laboratory, University of Tennessee, Knoxville (2002)

  40. Voelker, G., Anderson, E., Kimbrel, T., Feeley, M., Chase, J., Karlin, A., Levy, H.: Implementing Cooperative Prefetching and Caching in a Globally-Managed Memory System. Madison, WI, pp. 33–43, June 1999

  41. Woodward, P., Anderson, S., Porter, D., Iyer, A.: Distributed Computing in the SHMOD Framework on the NSF TeraGrid. Technical report, Laboratory for Computational Science and Engineering, University of Minnesota, February 2004

Author information

Corresponding author

Correspondence to Richard T. Mills.

About this article

Cite this article

Mills, R.T., Yue, C., Stathopoulos, A. et al. Runtime and Programming Support for Memory Adaptation in Scientific Applications via Local Disk and Remote Memory. J Grid Computing 5, 213–234 (2007). https://doi.org/10.1007/s10723-007-9075-7
