Principles of Shared Memory Parallel Programming Using ParC

Chapter in: Multicore Programming Using the ParC Language

Part of the book series: Undergraduate Topics in Computer Science (UTICS)

Abstract

This chapter introduces the basic concepts of parallel programming. It is based on the ParC language, an extension of the C programming language with block-oriented parallel constructs that let the programmer express fine-grain parallelism in a shared memory model. ParC can be used to express parallel algorithms, and it is also conducive to parallelizing sequential C programs. The chapter covers several topics in shared memory programming, each presented with simple examples demonstrating its utility, and supplies the basic tools and concepts needed to write parallel programs:

  • Practical aspects of threads, the sequential “atoms” of parallel programs.

  • Closed constructs to create parallelism.

  • Common bugs in parallel programs, such as race conditions.

  • The structure of the software environment that surrounds parallel programs.

  • The extension of C scoping rules to support private variables and local memory accesses.

  • The semantics of parallelism.

  • The discrepancy between the limited number of physical processors and the much larger number of parallel threads used in a program.

Notes

  1. In the sequel we will also use the shorthand PF i=1…n [S_i] for the parfor construct and PB [S_1,…,S_k] for the parblock construct.


Author information

Correspondence to Yosi Ben-Asher.

Copyright information

© 2012 Springer-Verlag London

About this chapter

Cite this chapter

Ben-Asher, Y. (2012). Principles of Shared Memory Parallel Programming Using ParC. In: Multicore Programming Using the ParC Language. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-2164-0_2

  • DOI: https://doi.org/10.1007/978-1-4471-2164-0_2

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-2163-3

  • Online ISBN: 978-1-4471-2164-0

  • eBook Packages: Computer Science (R0)
