Distributed Anemone: Transparent Low-Latency Access to Remote Memory

  • Michael R. Hines
  • Jian Wang
  • Kartik Gopalan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4297)


Performance of large-memory applications degrades rapidly once the system exhausts physical memory and begins paging to local disk. We present the design, implementation, and evaluation of Distributed Anemone (Adaptive Network Memory Engine), a lightweight, distributed system that pools the collective memory resources of multiple machines across a gigabit Ethernet LAN. Anemone treats remote memory as another level in the memory hierarchy, between very fast local memory and very slow local disk, and enables applications to access potentially "unlimited" network memory without any application or operating system modifications. Our kernel-level prototype features fully distributed resource management, low-latency paging, resource discovery, load balancing, soft-state refresh, and support for 'jumbo' Ethernet frames. Compared against disk-based paging, Anemone achieves low average page-fault latencies of 160 μs and application speedups of up to 4 times for a single process and up to 14 times for multiple concurrent processes.
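The core idea above, servicing page-outs and page-ins against a remote machine's RAM instead of local disk, can be illustrated with a toy sketch. This is a hypothetical user-space simplification for exposition only: the actual Anemone prototype operates inside the kernel, transferring 4 KB pages over raw (optionally jumbo) Ethernet frames rather than a TCP-style byte stream, and all names below are invented for the example.

```python
# Toy sketch of remote-memory paging: a "memory server" keeps evicted
# pages in its RAM, and a client pages 4 KB pages out to / in from it.
# (Hypothetical simplification; Anemone itself is a kernel-level system.)
import socket
import struct
import threading

PAGE_SIZE = 4096
HDR = "!cQ"  # 1-byte opcode ('W' or 'R') + 64-bit page number


def _recv_exact(sock, n):
    """Read exactly n bytes from a stream socket."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            break
        data += chunk
    return data


def serve(sock):
    """Memory server loop: store and return pages keyed by page number."""
    pages = {}
    while True:
        hdr = _recv_exact(sock, struct.calcsize(HDR))
        if len(hdr) < struct.calcsize(HDR):
            break  # client closed the connection
        op, page_no = struct.unpack(HDR, hdr)
        if op == b"W":  # page-out: receive and store one 4 KB page
            pages[page_no] = _recv_exact(sock, PAGE_SIZE)
            sock.sendall(b"A")  # acknowledge the write
        elif op == b"R":  # page-in: return the stored page (zeros if absent)
            sock.sendall(pages.get(page_no, bytes(PAGE_SIZE)))


def page_out(sock, page_no, data):
    """Evict one page to the remote memory server."""
    sock.sendall(struct.pack(HDR, b"W", page_no) + data)
    assert sock.recv(1) == b"A"


def page_in(sock, page_no):
    """Fault one page back in from the remote memory server."""
    sock.sendall(struct.pack(HDR, b"R", page_no))
    return _recv_exact(sock, PAGE_SIZE)


if __name__ == "__main__":
    server_end, client_end = socket.socketpair()
    threading.Thread(target=serve, args=(server_end,), daemon=True).start()
    page = b"x" * PAGE_SIZE
    page_out(client_end, 7, page)
    assert page_in(client_end, 7) == page
```

The round trip in the sketch is exactly the latency the paper measures: the 160 μs average page-fault time is the cost of one such request/response over the LAN, versus milliseconds for a disk seek.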


Keywords: Resource Discovery · Remote Memory · Local Disk · Distributed Shared Memory · Paging Request





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Michael R. Hines (1)
  • Jian Wang (1)
  • Kartik Gopalan (1)

  1. Computer Science Department, Binghamton University
