Analysis of the Memory Registration Process in the Mellanox InfiniBand Software Stack

  • Frank Mietke
  • Robert Rex
  • Robert Baumgartl
  • Torsten Mehlan
  • Torsten Hoefler
  • Wolfgang Rehm
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4128)


To leverage high-speed interconnects like InfiniBand, it is important to minimize the communication overhead. One of the most significant contributors to this overhead is the registration of communication memory.

In this paper, we present our analysis of the memory registration process inside the Mellanox InfiniBand driver and possible ways out of this bottleneck. We evaluate and characterize the most time-consuming parts in the execution path of the memory registration function using the Read Time Stamp Counter (RDTSC) instruction. We present measurements on AMD Opteron and Intel Xeon systems with different types of Host Channel Adapters for PCI-X and PCI Express. Finally, we conclude with first results using Linux hugepage support to shorten the time needed to register a memory region.
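The measurement methodology described above brackets each phase of the registration path with two timestamp reads. The sketch below illustrates that bracketing and the page-granularity effect that motivates hugepages; it is our own illustration, using `time.perf_counter_ns` as a portable stand-in for the x86 RDTSC instruction, and `fault_pages` as a hypothetical workload whose cost grows with the number of pages, as registration does.

```python
import time

def measure_ns(fn, *args):
    """Bracket a call with two timestamp reads, as the paper does with RDTSC."""
    t0 = time.perf_counter_ns()
    fn(*args)
    return time.perf_counter_ns() - t0

PAGE = 4096  # assumed base page size; an x86-64 hugepage would be 2 MiB

def fault_pages(n_bytes):
    """Touch every page of a new buffer, forcing it to be materialized.

    Registration cost scales with the number of pages to pin and
    translate; this is why hugepages shorten registration: one 2 MiB
    hugepage replaces 512 pages of 4 KiB each.
    """
    buf = bytearray(n_bytes)
    for off in range(0, n_bytes, PAGE):
        buf[off] = 1
    return buf

for size in (1 << 16, 1 << 20, 1 << 24):
    print(size, measure_ns(fault_pages, size), "ns")
```

Plotting the printed times against buffer size would show the roughly linear, per-page growth that the paper's RDTSC measurements break down phase by phase inside the driver.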


Keywords: Registration Time · Memory Area · Page Size · Memory Region · Page Fault





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Frank Mietke¹
  • Robert Rex¹
  • Robert Baumgartl¹
  • Torsten Mehlan¹
  • Torsten Hoefler¹
  • Wolfgang Rehm¹
  1. Department of Computer Science, Chemnitz University of Technology, Germany
