Abstract
A wave of parallel-processing research in the 1970s and 1980s developed techniques for concurrent task scheduling, including work-stealing scheduling and lazy task creation, along with ideas for supporting speculative computing, such as the sponsor model. These ideas saw little large-scale use while uniprocessor clock speeds continued to rise rapidly from year to year. Now that the growth in clock speeds has slowed dramatically and multicore processors have become the standard way to increase the throughput of processor chips, using parallelism to improve the performance of everyday applications has taken on greater importance, and concurrent task scheduling techniques are getting a second look.
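The core of work-stealing scheduling can be illustrated with a minimal sketch (not drawn from this paper): each worker keeps its own double-ended queue, popping its newest task locally and stealing the oldest task from a randomly chosen victim when its own deque runs dry. The `Worker` class and all names below are illustrative assumptions; real work-stealing runtimes such as Cilk combine this discipline with lazy task creation and far more careful synchronization.

```python
import collections
import random
import threading

class Worker:
    """One scheduler worker with its own double-ended task queue (illustrative sketch)."""
    def __init__(self, wid, workers):
        self.wid = wid
        self.workers = workers            # shared list of all workers
        self.deque = collections.deque()  # append/pop/popleft are atomic in CPython
        self.results = []

    def push(self, task):
        self.deque.append(task)           # owner pushes new tasks at the bottom

    def steal(self):
        victims = [w for w in self.workers if w is not self]
        random.shuffle(victims)
        for victim in victims:
            try:
                return victim.deque.popleft()  # thief takes the victim's oldest task
            except IndexError:
                continue                       # that victim was empty; try another
        return None

    def run(self):
        while True:
            try:
                task = self.deque.pop()   # owner pops its newest task (LIFO)
            except IndexError:
                task = self.steal()       # local deque empty: try to steal
                if task is None:
                    return                # no work found anywhere: finish
            self.results.append(task())

# Demo: seed all tasks on one worker and let the others steal their share.
workers = []
workers.extend(Worker(i, workers) for i in range(3))
for i in range(20):
    workers[0].push(lambda i=i: i * i)    # all work starts on worker 0
threads = [threading.Thread(target=w.run) for w in workers]
for t in threads:
    t.start()
for t in threads:
    t.join()
done = sorted(r for w in workers for r in w.results)
```

Since the tasks here spawn no further tasks, every task is removed exactly once by either a local pop or a steal, so `done` ends up containing all twenty squares regardless of how the work was distributed.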
Work stealing and lazy task creation have now been incorporated into a wide range of systems capable of “industrial strength” application execution, but support for speculative computing still lags behind. This paper traces these techniques from their origins to their use in present-day systems and suggests some directions for further investigation and development in the speculative computing area.
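The flavor of speculative computing at issue can be suggested with a small sketch, again not taken from this paper: several alternative searches run in parallel, the first useful answer is kept, and the losing branches are revoked so their processing resources can be reclaimed. This uses Python's standard `concurrent.futures` with cooperative cancellation through a shared event; the `search` function and its regions are hypothetical, and a real sponsor-style system would manage priorities and reclamation far more generally.

```python
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed

def search(region, target, stop):
    """Speculative branch: scan a region, standing down once another branch wins."""
    for x in region:
        if stop.is_set():
            return None            # speculation revoked: give up the processor
        if x == target:
            return x
    return None                    # exhausted the region without success

stop = threading.Event()
regions = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
winner = None
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(search, r, 1500, stop) for r in regions]
    for fut in as_completed(futures):
        result = fut.result()
        if result is not None:
            winner = result
            stop.set()             # revoke the still-running speculative branches
            break
```

Only one region contains the target, so the outcome is deterministic even though which branches get revoked mid-scan depends on timing.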
© 2014 Springer-Verlag Berlin Heidelberg
Halstead, R.H. (2014). Past and Future Directions for Concurrent Task Scheduling. In: Agha, G., et al. Concurrent Objects and Beyond. Lecture Notes in Computer Science, vol 8665. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-44471-9_8
Print ISBN: 978-3-662-44470-2
Online ISBN: 978-3-662-44471-9