Partitioning and scheduling of parallel functional programs using complexity information

  • Piyush Maheshwari
Parallel Processing And Systems
Part of the Lecture Notes in Computer Science book series (LNCS, volume 497)


This paper discusses how to exploit parallelism efficiently by improving the granularity of functional programs on a multiprocessor. Asymptotic complexity analysis of a function, estimating both its computation time and the communication involved in sending arguments to and receiving results from a remote processor, proves quite useful. We show how some parallel programs can be run more efficiently given prior knowledge of their time complexities (in big-O notation) and of the relative time complexities of their sub-expressions, supported by analytical reasoning and practical examples on the larger-grain distributed multiprocessor machine LAGER. Ordered scheduling of processes, with priorities determined by these relative time complexities, further improves on run-time dynamic load balancing and gives better utilisation of resources.
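The decision described above can be sketched as a simple cost model. This is an illustrative sketch, not the paper's implementation; all names (`Task`, `worth_offloading`, `schedule`) and the concrete cost functions are hypothetical. A sub-expression is shipped to a remote processor only when its estimated computation time, derived from its complexity and argument size, dominates the cost of sending the arguments and receiving the result; the surviving candidates are then scheduled in descending order of estimated cost.

```python
# Hedged sketch of complexity-guided partitioning and ordered scheduling.
# All names and cost functions here are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    cost: Callable[[int], float]  # estimated computation time vs. input size
    comm: Callable[[int], float]  # estimated cost of sending args + receiving result
    size: int                     # size of the actual argument

    def est_compute(self) -> float:
        return self.cost(self.size)

    def est_comm(self) -> float:
        return self.comm(self.size)

def worth_offloading(t: Task) -> bool:
    # Offload only when the work dominates the communication overhead;
    # otherwise evaluate locally to keep the grain coarse.
    return t.est_compute() > t.est_comm()

def schedule(tasks: list[Task]) -> list[Task]:
    # Ordered scheduling: highest estimated cost first, so the longest
    # sub-computations are started earliest.
    remote = [t for t in tasks if worth_offloading(t)]
    return sorted(remote, key=Task.est_compute, reverse=True)

tasks = [
    Task("sort",   cost=lambda n: n * n, comm=lambda n: 10 * n, size=100),  # O(n^2) work
    Task("lookup", cost=lambda n: n,     comm=lambda n: 10 * n, size=100),  # O(n) work, comm-bound
]
print([t.name for t in schedule(tasks)])  # → ['sort']: only "sort" passes the granularity test
```

The point of the sketch is that the test uses only statically derived complexity information and the argument size, so it can be made before any work is shipped, unlike purely dynamic load balancing.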


Functional programming, granularity, partitioning, scheduling, time and communication complexity functions



  1. [Allen et al. 86]
    R. Allen, K. Kennedy, and A. Porterfield. PTOOL: A semi-automatic parallel programming assistant. Proceedings of the International Conference on Parallel Processing, 164–170, 1986.
  2. [Appelbe et al. 89]
    B. Appelbe, K. Smith, and C.E. McDowell. Start/Pat: A parallel-programming toolkit. IEEE Software, 6(4):29–38, 1989.
  3. [Arvind et al. 88]
    Arvind, D.E. Culler, and G.K. Maa. Assessing the benefits of fine-grain parallelism in dataflow programs. The International Journal of Supercomputer Applications, 2(3):10–36, 1988.
  4. [Babb 84]
    R.G. Babb. Parallel processing with large-grain data flow techniques. IEEE Computer, 17(7):55–61, July 1984.
  5. [Backus 78]
    J. Backus. Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. Comm. ACM, 21(8):613–641, August 1978.
  6. [Cripps et al. 87]
    M.D. Cripps, J. Darlington, A.J. Field, P.G. Harrison, and M. Reeve. The design and implementation of ALICE: A parallel graph reduction machine. In Selected Reprints on Dataflow and Reduction Architectures, Shreekant Thakkar (Ed.), IEEE Computer Society Press, 1987.
  7. [Darlington et al. 89]
    J. Darlington, M. Reeve, and S. Wright. Declarative Languages and Program Transformation for Programming Parallel Systems: A Case Study. Department of Computing, Imperial College, London, 1989.
  8. [Goldberg 88]
    B. Goldberg. Multiprocessor Execution of Functional Programs. Ph.D. Thesis, YALEU/DCS/RR-618, Department of Computer Science, Yale University, April 1988.
  9. [King and Soper 90]
    A. King and P. Soper. Granularity Control of Concurrent Logic Programs. Technical Report CSTR-90-6, Department of Electronics and Computer Science, Southampton University, Southampton, March 1990.
  10. [Le Métayer 88]
    D. Le Métayer. ACE: An automatic complexity evaluator. ACM Transactions on Programming Languages and Systems, 10(2):248–266, April 1988.
  11. [Maheshwari 90]
    Piyush Maheshwari. Controlling Parallelism in Functional Programs using Complexity Information. Ph.D. Thesis, Department of Computer Science, University of Manchester, submitted October 1990.
  12. [Peyton-Jones 87]
    S.L. Peyton-Jones. The Implementation of Functional Programming Languages. Prentice-Hall International, 1987.
  13. [Peyton-Jones et al. 87]
    S.L. Peyton-Jones, C.D. Clack, J. Salkild, and M. Hardie. GRIP — A high-performance architecture for parallel graph reduction. Proceedings of the 1987 Conference on Functional Programming Languages and Computer Architecture, Springer-Verlag, LNCS 274, September 1987.
  14. [Rabhi and Manson 90]
    F. Rabhi and G.A. Manson. Using Complexity Functions to Control Parallelism in Functional Programs. Technical Report CS-90-1, Department of Computer Science, University of Sheffield, Sheffield, 1990.
  15. [Rosendahl 89]
    M. Rosendahl. Automatic complexity analysis. In Proceedings of the Conference on Functional Programming Languages and Computer Architecture, London, ACM Press, September 1989.
  16. [Sarkar 89]
    Vivek Sarkar. Partitioning and Scheduling Parallel Programs for Multiprocessors. Research Monographs in Parallel and Distributed Computing, Pitman, London, and the MIT Press, Cambridge, Massachusetts, 1989.
  17. [Watson 88]
    Ian Watson. Lager — Interim Report. Ref. No. FS/MU/IW/026-88, Department of Computer Science, University of Manchester, November 1988.
  18. [Watson 89]
    Ian Watson. Simulation of a Physical EDS Machine Architecture. EDS Internal Report, Department of Computer Science, University of Manchester, October 1989.
  19. [Watson et al. 87]
    I. Watson, J. Sargeant, P. Watson, and J.V. Woods. Flagship computational models and machine architecture. ICL Technical Journal, 5(3), May 1987.
  20. [Wegbreit 75]
    B. Wegbreit. Mechanical program analysis. Comm. ACM, 18(9):528–539, September 1975.
  21. [Wong 89]
    P.S. Wong. Parallel Implementation Techniques for Efficient Declarative Systems. Ph.D. Interim Report, Department of Computer Science, University of Manchester, November 1989.

Copyright information

© Springer-Verlag Berlin Heidelberg 1991

Authors and Affiliations

  • Piyush Maheshwari
  1. Department of Computer Science, University of Manchester, Manchester, U.K.
  2. C/o Sri R.A. Maheshwari, Advocate, Najibabad
