Capsules: Expressing Composable Computations in a Parallel Programming Model

  • Hasnain A. Mandviwala
  • Umakishore Ramachandran
  • Kathleen Knobe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5234)

Abstract

A well-known problem in designing high-level parallel programming models and languages is the “granularity problem”: parallel task instances that are too fine-grain incur large overheads in the parallel run-time and diminish the speed-up achieved by parallel execution, while tasks that are too coarse-grain create load imbalance and under-utilize the parallel machine. In this work we address this issue with the concept of expressing “composable computations” in a parallel programming model called “Capsules”. Such composability allows execution granularity to be adjusted at run-time.

In Capsules, we provide a unifying framework that allows composition and adjustment of granularity for both data and computation over the iteration space and the computation space. We show that this concept lets the user express decisions not only on the granularity of execution, but also on the granularity of garbage collection and of other features the programming model may support.
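
To make this concrete, here is a minimal sketch (not the Capsules API; all names below are hypothetical) in which a single tiling parameter over an iteration space sets the granularity of both computation and data: each tile groups the iterations that execute together and, equally, the data items that are allocated and reclaimed together.

    // Hypothetical sketch: one tiling parameter controls granularity over an
    // iteration space, grouping both the computation steps and the data items
    // they produce. Illustrative only; not the Capsules API.
    #include <cstddef>
    #include <vector>

    struct Tile {
        std::size_t begin;  // first iteration in the tile (inclusive)
        std::size_t end;    // one past the last iteration (exclusive)
    };

    // Partition the iteration space [0, n) into tiles of `grain` iterations.
    // A larger `grain` yields coarser execution units and, symmetrically,
    // coarser units of storage management: a whole tile's outputs can be
    // reclaimed at once instead of item by item.
    std::vector<Tile> tile_iteration_space(std::size_t n, std::size_t grain) {
        if (grain == 0) grain = 1;  // guard against a degenerate tile size
        std::vector<Tile> tiles;
        for (std::size_t b = 0; b < n; b += grain) {
            tiles.push_back({b, b + grain < n ? b + grain : n});
        }
        return tiles;
    }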

We argue that this adaptability of execution granularity leads to efficient parallel execution by matching the available application concurrency to the available hardware concurrency, thereby reducing parallelization overhead. By matching, we refer to creating coarse-grain Computation Capsules that each encompass multiple fine-grain computation instances. In effect, creating coarse-grain computations reduces overhead simply by reducing the number of parallel computations. This leads to: (1) reduced synchronization cost, such as for blocked searches in shared data structures; (2) reduced distribution and scheduling cost for parallel computation instances; and (3) reduced book-keeping cost for maintaining data structures, such as those tracking unfulfilled data requests.
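
A short sketch illustrates why (again with hypothetical names, not the paper's implementation): composing several fine-grain steps into one coarse-grain capsule leaves the run-time with one unit to schedule, synchronize, and track instead of many.

    // Hypothetical sketch: batching fine-grain steps into coarse-grain
    // capsules so the run-time manages fewer parallel units. Illustrative
    // only; not the Capsules implementation.
    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <thread>
    #include <vector>

    using Step = std::function<void()>;  // one fine-grain computation instance

    // Run `steps` as coarse-grain capsules of `grain` consecutive steps each:
    // one thread per capsule executes its steps serially, so scheduling,
    // synchronization, and book-keeping costs are paid once per capsule
    // rather than once per fine-grain step.
    void run_as_capsules(const std::vector<Step>& steps, std::size_t grain) {
        if (grain == 0) grain = 1;  // guard against a degenerate capsule size
        std::vector<std::thread> workers;
        for (std::size_t b = 0; b < steps.size(); b += grain) {
            const std::size_t e = std::min(b + grain, steps.size());
            workers.emplace_back([&steps, b, e] {
                for (std::size_t i = b; i < e; ++i) steps[i]();  // serial inside the capsule
            });
        }
        for (auto& w : workers) w.join();
    }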

Capsules builds on our prior work, TStreams, a data-flow-oriented parallel programming framework. Our results on an SMP machine with the Cascade Face Detector and Stereo Vision Depth applications show that adjusting execution granularity through profiling helps determine the optimal coarse-grain serial execution granularity, reduces parallelization overhead, and yields maximum application performance.

References

  1. Asanovic, K., Bodik, R., Catanzaro, B.C., Gebis, J.J., Husbands, P., Keutzer, K., Patterson, D.A., Plishker, W.L., Shalf, J., Williams, S.W., Yelick, K.A.: The Landscape of Parallel Computing Research: A View from Berkeley. Technical Report UCB/EECS-2006-183, EECS Department, University of California, Berkeley (December 2006)
  2. Blumofe, R.D., Joerg, C.F., Kuszmaul, B.C., Leiserson, C.E., Randall, K.H., Zhou, Y.: Cilk: An Efficient Multithreaded Runtime System. In: PPOPP 1995: Proceedings of the Fifth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 207–216. ACM Press, New York (1995)
  3. OpenMP Architecture Review Board: OpenMP: Simple, Portable, Scalable SMP Programming (2006)
  4. Carter, L., Ferrante, J., Hummel, S.F., Alpern, B., Gatlin, K.-S.: Hierarchical Tiling: A Methodology for High Performance. Technical Report CS-96-508, University of California at San Diego, San Diego, CA (1996)
  5. Gelernter, D.: Generative communication in Linda. ACM Transactions on Programming Languages and Systems 7(1), 80–112 (1985)
  6. Intel: C++ Compiler 9.1 for Linux
  7. Knobe, K., Offner, C.: TStreams: How to Write a Parallel Program. Technical Report HPL-2004-193, Hewlett-Packard Labs - Cambridge Research Laboratory, Cambridge, MA (2004)
  8. Kusano, K., Satoh, S., Sato, M.: In: Valero, M., Joe, K., Kitsuregawa, M., Tanaka, H. (eds.) ISHPC 2000. LNCS, vol. 1940, p. 403. Springer, Heidelberg (2000)
  9. Lam, M.S., Rinard, M.C.: Coarse-grain parallel programming in Jade. In: PPOPP 1991: Proceedings of the Third ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 94–105. ACM Press, New York (1991)
  10. Levon, J.: OProfile, a system-wide profiler for Linux systems
  11. Nikhil, R.S., Ramachandran, U., Rehg, J.M., Halstead Jr., R.H., Joerg, C.F., Kontothanassis, L.: Stampede: A programming system for emerging scalable interactive multimedia applications. In: Carter, L., Ferrante, J., Sehr, D., Chatterjee, S., Prins, J.F., Li, Z., Yew, P.-C. (eds.) LCPC 1998. LNCS, vol. 1656. Springer, Heidelberg (1999)
  12. Offner, C., Knobe, K.: Weak Dynamic Single Assignment Form. Technical Report HPL-2003-169R1, Hewlett-Packard Labs - Cambridge Research Laboratory, Cambridge, MA (2003)
  13. Ramachandran, U., Nikhil, R., Rehg, J.M., Angelov, Y., Adhikari, S., Mackenzie, K., Harel, N., Knobe, K.: Stampede: A Cluster Programming Middleware for Interactive Stream-oriented Applications. IEEE Transactions on Parallel and Distributed Systems (2003)
  14. Ramachandran, U., Nikhil, R.S., Harel, N., Rehg, J.M., Knobe, K.: Space-Time Memory: A Parallel Programming Abstraction for Interactive Multimedia Applications. In: Proc. Principles and Practice of Parallel Programming (PPoPP 1999), Atlanta, GA (May 1999)
  15. Rehg, J.M., Knobe, K., Ramachandran, U., Nikhil, R.S., Chauhan, A.: Integrated Task and Data Parallel Support for Dynamic Applications. Scientific Programming 7(3-4), 289–302 (1999); invited paper selected from the 1998 Workshop on Languages, Compilers, and Run-Time Systems
  16. Rinard, M.C., Scales, D.J., Lam, M.S.: Heterogeneous Parallel Programming in Jade. In: Supercomputing 1992: Proceedings of the 1992 ACM/IEEE Conference on Supercomputing, pp. 245–256. IEEE Computer Society Press, Los Alamitos (1992)
  17. Rinard, M.C., Scales, D.J., Lam, M.S.: Jade: A High-Level, Machine-Independent Language for Parallel Programming. Computer 26(6), 28–38 (1993)
  18. Sutter, H., Larus, J.: Software and the Concurrency Revolution. Queue 3(7), 54–62 (2005)
  19. Viola, P., Jones, M.: Rapid Object Detection using a Boosted Cascade of Simple Features. In: Proc. CVPR 2001, p. 511 (2001)
  20. Yang, R., Pollefeys, M.: A Versatile Stereo Implementation on Commodity Graphics Hardware. Journal of Real-Time Imaging 11, 7–18 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Hasnain A. Mandviwala (1)
  • Umakishore Ramachandran (1)
  • Kathleen Knobe (2)

  1. College of Computing, Georgia Institute of Technology
  2. Intel Corporation
