
LISP and Symbolic Computation, Volume 5, Issue 1–2, pp. 73–93

Implementing Concurrent Scheme for the Mayfly distributed parallel processing system

  • R. Kessler
  • H. Carr
  • L. Stoller
  • M. Swanson
Article

Abstract

This paper discusses a parallel Lisp system developed for a distributed-memory parallel processor, the Mayfly. The language has been adapted to the problems of distributed data by providing a tight coupling of control and data, including mechanisms for mutual exclusion and data sharing. The language was primarily designed to execute on the Mayfly, but also runs on networked workstations. We first show the relevant parts of the language as seen by the user, then concentrate on the system-Lisp-level implementation of these constructs, with particular attention to agents, a mechanism for limiting the cost of remote operations. We also briefly describe the low-level kernel hardware and software support for the system Lisp primitives.
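
The abstract's two central ideas, coupling data with the control that is allowed to operate on it, and issuing remote operations whose results are claimed later, can be sketched in ordinary Scheme. The sketch below is illustrative only: future, touch, and the closure-based counter "domain" are names and single-node semantics assumed here for exposition, not the constructs actually provided by Concurrent Scheme or described in the paper.

    ;; Illustrative sketch only -- the names and semantics below are assumed
    ;; for exposition and are NOT the paper's actual Concurrent Scheme API.

    ;; A future-like construct, modelled here with DELAY/FORCE: evaluation is
    ;; merely deferred on one node rather than started on a remote one.
    (define-syntax future
      (syntax-rules ()
        ((_ expr) (delay expr))))

    (define (touch p) (force p))   ; claim the result of a deferred operation

    ;; A domain-like construct: the counter state is reachable only through
    ;; this closure, giving a single point at which mutual exclusion could be
    ;; enforced on a real multiprocessor.
    (define (make-counter-domain)
      (let ((count 0))
        (lambda (op)
          (case op
            ((bump!) (set! count (+ count 1)) count)
            ((value) count)))))

    (define counter (make-counter-domain))
    (define r (future (counter 'bump!)))   ; request the operation
    (display (touch r))                    ; wait for (here: compute) the result
    (newline)

On the Mayfly the analogous operations involve genuine concurrency and message traffic between nodes; DELAY/FORCE only defers evaluation, which is enough to show the shape of the programming model, not its cost or its distribution behaviour.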

Keywords

Operating System · Artificial Intelligence · Processing System · Parallel Processing · Data Sharing
(These keywords were added by machine, not by the authors.)



Copyright information

© Kluwer Academic Publishers 1992

Authors and Affiliations

  • R. Kessler (1)
  • H. Carr (1)
  • L. Stoller (1)
  • M. Swanson (1)
  1. Center for Software Science, Department of Computer Science, University of Utah, Salt Lake City, U.S.A.
