An architecture that efficiently updates associative aggregates in applicative programming languages

  • John T. O'Donnell
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 201)


Applicative (also called functional) programming systems prohibit side effects, including assignments to variables. This restriction has several advantages, including referential transparency and potential parallel program execution. A major disadvantage, however, is that aggregate data structures become very expensive to maintain: when the programmer updates a single element in an aggregate, many applicative language implementations must completely recopy the aggregate. This paper solves the aggregate update problem with a two-level architecture: a microprogram maintains “associative aggregate” data structures, and a hardware memory design (the Associative Aggregate Machine) implements powerful insertion, deletion and searching operations required by the microprogram. The Associative Aggregate Machine contains a linear sequence of cells comprising storage and combinational logic. Each cell is connected to its predecessor and successor, so the sequence of cells forms a shift register that supports insertion and deletion. In addition, a binary tree of combinational logic nodes performs fast associative searching through the sequence of cells. The Associative Aggregate Machine architecture is extremely regular and is well suited for VLSI implementation.
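The shift-register insertion/deletion and associative search described above can be illustrated with a minimal software sketch. This is a hypothetical sequential simulation, not the paper's microprogram or hardware design: the class and method names are illustrative, and operations that the Associative Aggregate Machine performs in parallel (cell-local key comparison, tree-based match selection) are modeled here with ordinary loops.

```python
class Cell:
    """One storage cell: storage for a (key, value) pair.

    In the hardware design each cell also contains combinational logic
    and is wired to its predecessor and successor; here a Python list
    stands in for that linear sequence.
    """
    def __init__(self, key, value):
        self.key = key
        self.value = value


class AssociativeAggregateMachine:
    """Sequential sketch of the machine's three core operations."""

    def __init__(self):
        self.cells = []  # the linear sequence of cells

    def insert_after(self, index, key, value):
        # Shift-register insertion: cells beyond `index` shift toward
        # the successor end, opening a slot for the new cell.
        self.cells.insert(index + 1, Cell(key, value))

    def delete(self, index):
        # Shift-register deletion: successor cells shift back to
        # close the gap left by the removed cell.
        del self.cells[index]

    def search(self, key):
        # In hardware, every cell compares its key simultaneously and a
        # binary tree of combinational nodes selects the first match in
        # O(log n) gate delays; simulated sequentially here.
        for i, cell in enumerate(self.cells):
            if cell.key == key:
                return i
        return None

    def update(self, key, value):
        # A single-element update touches exactly one cell, so the
        # aggregate never needs to be recopied.
        i = self.search(key)
        if i is not None:
            self.cells[i].value = value
```

For example, inserting two cells, searching, and updating in place:

```python
m = AssociativeAggregateMachine()
m.insert_after(-1, "a", 1)   # sequence: [a]
m.insert_after(0, "b", 2)    # sequence: [a, b]
m.update("a", 10)            # touches only cell 0; no recopying
```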






Copyright information

© Springer-Verlag Berlin Heidelberg 1985

Authors and Affiliations

  • John T. O'Donnell
    Computer Science Department, Indiana University, Bloomington
