
High-Performance Graph Algorithms from Parallel Sparse Matrices

  • John R. Gilbert
  • Steve Reinhardt
  • Viral B. Shah
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4699)

Abstract

Large-scale computation on graphs and other discrete structures is becoming increasingly important in many applications, including computational biology, web search, and knowledge discovery. High-performance combinatorial computing is an infant field, in sharp contrast with numerical scientific computing.

We argue that many of the tools of high-performance numerical computing – in particular, parallel algorithms and data structures for computation with sparse matrices – can form the nucleus of a robust infrastructure for parallel computing on graphs. We demonstrate this with an implementation of a graph analysis benchmark using the sparse matrix infrastructure in Star-P, our parallel dialect of the Matlab programming language.
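
To make the correspondence between graphs and sparse matrices concrete, here is a minimal sketch, written in plain Matlab notation rather than Star-P and using an illustrative graph and variable names of our own, of breadth-first search expressed as repeated sparse matrix-vector products; each product discovers the next level of vertices.

    n = 6;                                    % number of vertices (illustrative)
    edges = [1 2; 1 3; 2 4; 3 4; 4 5; 5 6];   % directed edge list (illustrative)
    A = sparse(edges(:,1), edges(:,2), 1, n, n);  % adjacency matrix: A(i,j) = 1 for edge i -> j

    source = 1;
    frontier = false(n, 1);  frontier(source) = true;   % vertices found at the current level
    visited  = frontier;
    level    = -ones(n, 1);  level(source) = 0;         % BFS distance from source (-1 = unreached)

    d = 0;
    while any(frontier)
        % One BFS level: a sparse matrix-vector product gathers all
        % out-neighbors of the frontier; masking drops vertices seen before.
        reached = (A' * double(frontier)) > 0;
        next    = reached & ~visited;
        d = d + 1;
        level(next) = d;
        visited  = visited | next;
        frontier = next;
    end

    disp(level')   % BFS level of each vertex from the source

In a parallel setting such as Star-P, the same pattern would let the distributed sparse matrix-vector product supply the parallelism while the graph code stays at this level of abstraction.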

Keywords

Sparse Matrix, Input Graph, Large Graph, Sparse Matrices, Basic Design Principle

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • John R. Gilbert (1)
  • Steve Reinhardt (2)
  • Viral B. Shah (1)
  1. University of California, Dept. of Computer Science, Harold Frank Hall, Santa Barbara, CA 93106, USA
  2. Silicon Graphics Inc.
