FAIM-1: An Architecture for Symbolic Multiprocessing

  • Alan L. Davis
Part of The Kluwer International Series in Engineering and Computer Science book series (SECS, volume 26)


The FAIM-1 system is an attempt to provide a significant performance gain over conventional machines in support of symbolic artificial intelligence (AI) processing. The primary performance-enhancement mechanism is to consistently exploit concurrency at all levels of the architecture. In designing FAIM-1, prime consideration was given to programmability, performance, extensibility, fault tolerance, and the cost-effective use of modern circuit and packaging technology.


Keywords: Garbage Collection, Communication Topology, Buffer Pool, Instruction Storage, Content Addressable Memory




  1. C. L. Seitz, “System Timing,” in Introduction to VLSI Systems. New York: McGraw-Hill, 1979.
  2. H. Stopper, “A Wafer with Electrically Programmable Interconnections,” in Proc. 1985 IEEE International Solid State Circuits Conference, 1985, pp. 268–269.
  3. A. L. Davis and S. V. Robison, “An Overview of the FAIM-1 Multiprocessing System,” Proc. First Annual Artificial Intelligence and Advanced Computer Technology Conference. Wheaton, IL: Tower Conference Management Company, 1985, pp. 291–296.
  4. W. D. Clinger, The Revised Revised Report on Scheme. Tech. Rep. 174, Computer Science Department, Indiana University, 1985.
  5. R. H. Halstead, Jr., “Parallel Symbolic Computing,” Computer, 19(8):35–43, 1986.
  6. A. Goldberg and D. Robson, Smalltalk-80: The Language and its Implementation. Menlo Park, CA: Addison-Wesley, 1983.
  7. W. C. Athas and C. L. Seitz, Cantor User Report. Tech. Rep. 5232:TR:86, Department of Computer Science, California Institute of Technology, Pasadena, CA, 1986.
  8. W. F. Clocksin and C. S. Mellish, Programming in Prolog. New York: Springer-Verlag, 1981.
  9. P. H. Winston and B. K. P. Horn, Lisp. Reading, MA: Addison-Wesley, 1980.
  10. A. J. Martin, “The Torus: An Exercise in Constructing a Processing Surface,” Proc. of the 2nd Caltech Conference on VLSI, 1981, pp. 527–538.
  11. D. P. Siewiorek and R. S. Swarz, The Theory and Practice of Reliable System Design. Bedford, MA: Digital Press, 1982.
  12. D. Gordon, I. Koren and G. M. Silberman, Fault-Tolerance in VLSI Hexagonal Arrays. Tech. Rep., Department of Electrical Engineering, Technion, Israel.
  13. T. F. Knight, Jr., D. A. Moon, J. Holloway and G. L. Steele, Jr., CADR. Tech. Rep. 528, MIT Artificial Intelligence Laboratory, Cambridge, MA, 1981.
  14. B. W. Lampson, K. A. Pier, G. A. McDaniel, S. M. Ornstein and D. W. Clark, The Dorado: A High-Performance Personal Computer. Tech. Rep. CSL-81-1, Xerox Palo Alto Research Center, Palo Alto, CA, 1981.
  15. J. L. Hennessy, “VLSI Processor Architecture,” IEEE Trans. on Computers, C-33(12):1221–1246, 1984.
  16. D. Warren, Implementing Prolog. Tech. Rep. 39, Edinburgh University, Edinburgh, Scotland, 1977.
  17. W. S. Coates, The Design of an Instruction Stream Memory Subsystem. Master’s thesis, University of Calgary, Calgary, Alberta, Canada, 1985.
  18. M. D. Hill and A. J. Smith, “Experimental Evaluation of On-Chip Microprocessor Cache Memories,” Proc. 11th Annual Symp. on Computer Architecture, Ann Arbor, MI, 1984, pp. 158–166.
  19. I. Flores, “Lookahead Control in the IBM System/370 Model 165,” Computer, 1974.
  20. S. Weiss and J. E. Smith, “Instruction Issue Logic for Pipelined Supercomputers,” Proc. 11th Annual Symp. on Computer Architecture, Ann Arbor, MI, 1984, pp. 110–118.
  21. S. A. Przybylski, T. R. Gross, J. L. Hennessy, N. P. Jouppi and C. Rowen, “Organization and VLSI Implementation of MIPS,” Journal of VLSI and Computing Systems, 1(3):170–208.
  22. T. R. Gross, Code Optimization of Pipeline Constraints. PhD thesis, Stanford University, Stanford, CA, 1983.
  23. K. S. Stevens, The Communications Framework for a Distributed Ensemble Architecture. AI 47, Schlumberger Palo Alto Research, Palo Alto, CA, 1986.
  24. K. S. Stevens, S. V. Robison and A. L. Davis, “The Post Office—Communication Support for Distributed Ensemble Architectures,” Proc. 6th Int. Conf. on Distributed Computing Systems, 1986, pp. 160–166.

Copyright information

© Kluwer Academic Publishers 1988
