
LISP and Symbolic Computation, Volume 5, Issue 1–2, pp 7–47

Mayfly: A general-purpose, scalable, parallel processing architecture

Al Davis

Abstract

The Mayfly is a scalable general-purpose parallel processing system being designed at HP Laboratories, in collaboration with colleagues at the University of Utah. The system is intended to efficiently support parallel variants of modern programming languages such as Lisp, Prolog, and Object Oriented Programming models. These languages impose a common requirement on the hardware platform to support dynamic system needs such as runtime type checking and dynamic storage management. The main programming language for the Mayfly is a concurrent dialect of Scheme. The system is based on a distributed-memory model, and communication between processing elements is supported by message passing. The initial prototype of Mayfly will consist of 19 identical processing elements interconnected in a hexagonal mesh structure. In order to achieve the goal of scalable performance, each processing element is a parallel processor as well, which permits the application code, runtime operating system, and communication to all run in parallel. A 7-processing-element subset of the prototype is presently operational. This paper describes the hardware architecture after a brief background synopsis of the software system structure.
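As an illustration of the interconnect described above — a sketch, not code from the paper — the 19-element hexagonal mesh can be modeled in axial hex coordinates, where a radius-2 hexagon contains exactly 19 cells and each interior element has six neighbors. The coordinate scheme and function names here are assumptions for exposition only.

```python
# Hypothetical sketch of a radius-2 hexagonal mesh (19 cells) in axial
# coordinates, illustrating the Mayfly prototype's interconnect topology.

# The six axial directions of a hexagonal grid.
DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_mesh(radius=2):
    """All axial coordinates (q, r) within `radius` of the centre.

    For radius 2 this yields 19 cells, matching the prototype's
    19 processing elements."""
    return {(q, r)
            for q in range(-radius, radius + 1)
            for r in range(-radius, radius + 1)
            if abs(q + r) <= radius}

def neighbours(cell, mesh):
    """Cells directly linked to `cell` (up to 6 in a hexagonal mesh)."""
    q, r = cell
    return {(q + dq, r + dr) for dq, dr in DIRECTIONS} & mesh

mesh = hex_mesh(2)
print(len(mesh))                      # 19 processing elements
print(len(neighbours((0, 0), mesh)))  # an interior PE has 6 links
```

Elements on the mesh boundary have fewer links (three at the corners), which is one reason a fixed-degree mesh scales more cheaply than a fully connected network: each PE needs only a bounded number of communication ports regardless of system size.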




Copyright information

© Kluwer Academic Publishers 1992

Authors and Affiliations

  • Al Davis
  1. Hewlett-Packard Laboratories, Palo Alto
