Non-homogeneous parallel memory operations in a VLIW machine

  • R. Milikowski
  • W. G. Vree
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 854)


The G-hinge is a VLIW machine that exploits instruction-level parallelism in scalar, memory-intensive applications. Its memory architecture allows multiple non-homogeneous memory operations to execute in parallel: a configuration of independently programmable memory modules forms the basis of the design. The parallel memory instructions and the datapath operations are packed together into VLIW instructions. LML, a lazy functional language, has been implemented on the G-hinge. The architecture is described, and it is shown to be capable of exploiting the fine-grain parallelism available on the stack and the heap.
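The packing idea described above can be illustrated with a small simulator. This is a minimal sketch under assumed conventions, not the G-hinge's actual instruction format: each VLIW word here carries a stack-module slot, a heap-module slot, and a datapath slot, and all slots of one word issue in the same cycle. The slot layout, the `VLIWMachine` class, and the operation encodings are all hypothetical.

```python
# Hypothetical sketch of VLIW instruction packing (assumed slot layout, not
# the paper's ISA): one word holds one operation per independently
# programmable unit, so non-homogeneous operations issue in the same cycle.
from dataclasses import dataclass, field

@dataclass
class VLIWMachine:
    stack: list = field(default_factory=list)   # stack memory module
    heap: dict = field(default_factory=dict)    # heap memory module
    acc: int = 0                                # datapath accumulator

    def step(self, word):
        # word = (stack_op, heap_op, alu_op). Each slot drives a different
        # module, so the three operations are independent and can be issued
        # together even though they are non-homogeneous.
        stack_op, heap_op, alu_op = word
        if stack_op is not None:
            stack_op(self.stack)
        if heap_op is not None:
            heap_op(self.heap)
        if alu_op is not None:
            self.acc = alu_op(self.acc)

m = VLIWMachine()
# One VLIW word: push a value on the stack, allocate a heap cell (here a
# cons-like node, as a graph reducer might), and bump a datapath counter.
m.step((
    lambda s: s.append(42),                      # stack-module slot
    lambda h: h.__setitem__(0, ("CONS", 1, 2)),  # heap-module slot
    lambda a: a + 1,                             # datapath slot
))
```

In a real machine the parallelism would come from the memory modules having their own address and data paths; the simulator only captures the scheduling constraint that one word may contain at most one operation per module.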





Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • R. Milikowski¹
  • W. G. Vree¹
  1. Department of Computer Systems, University of Amsterdam, Amsterdam, The Netherlands
