Extent analysis of data fields

  • Björn Lisper
  • Jean-François Collard
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 864)


Data parallelism means operating in parallel on distributed tables, or data fields. An abstract model of data parallelism, developed in [10], treats data fields as functions explicitly restricted to a finite set. Data-parallel functional languages based on this view can reach a very high level of abstraction. Here we consider two static analyses that, when successful, yield information about the extent of a data field with recursively defined elements, in the form of a predicate that is true wherever the data field is defined. This information can be used to preallocate the elements, to map them efficiently to distributed memory, and to aid the static scheduling of operations. The predicates can be seen as more detailed extensions of classical data dependency notions such as strictness. The analyses are cast in the framework of abstract interpretation: a forward analysis propagates restrictions on inputs to restrictions on outputs, and a backward analysis propagates restrictions the other way. For both analyses, fixpoint iteration can sometimes be used to solve the equations that arise. In particular, when the predicates are linear inequalities, integer linear programming methods can be used to detect termination of the fixpoint iteration.
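As a hypothetical illustration of the forward analysis described above (the field, function names, and finite universe are our own assumptions, not taken from the paper), the extent of a recursively defined data field can be computed by naive fixpoint iteration over a finite index set:

```python
# Hypothetical sketch, not the paper's algorithm: model a data field's extent
# as a set of defined indices and compute it by forward fixpoint iteration.

def forward_extent(n):
    """Forward extent analysis for the example field f(0) = 1, f(i) = i * f(i-1):
    starting from the base index 0, propagate the dependence i -> i - 1 over
    the finite universe {0, ..., n} until the set of defined indices is stable."""
    defined = {0}                      # base case: f(0) is defined outright
    while True:
        step = defined | {i for i in range(n + 1) if i - 1 in defined}
        if step == defined:            # fixpoint reached: the extent is stable
            return defined
        defined = step

# The resulting extent predicate is the linear inequality 0 <= i <= n, which a
# compiler could use to preallocate all n + 1 elements before evaluation.
```

For example, `sorted(forward_extent(4))` yields `[0, 1, 2, 3, 4]`. In the paper's setting the iteration runs over symbolic predicates rather than explicit index sets, with integer linear programming deciding when the iteration has converged.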


Keywords: data parallelism, functional programming, abstract interpretation, arrays




References

  1. S. Anderson and P. Hudak. Compilation of Haskell array comprehensions for scientific computing. In Proceedings of the ACM SIGPLAN'90 Conference on Programming Language Design and Implementation, pages 137–149. ACM, June 1990.
  2. G. E. Blelloch, S. Chatterjee, J. C. Hardwick, J. Sipelstein, and M. Zagha. Implementation of a portable nested data-parallel language. In Proceedings 4th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 102–111, San Diego, May 1993.
  3. G. E. Blelloch and G. W. Sabot. Compiling collection-oriented languages onto massively parallel computers. Journal of Parallel and Distributed Computing, 8:119–134, 1990.
  4. S. Chatterjee, G. E. Blelloch, and A. L. Fisher. Size and access inference for data-parallel programs. In Proc. ACM SIGPLAN'91 Conference on Programming Language Design and Implementation, pages 130–144, June 1991.
  5. E. W. Dijkstra. A Discipline of Programming. Prentice Hall, Englewood Cliffs, N.J., 1976.
  6. E. Duesterwald, R. Gupta, and M.-L. Soffa. A practical data flow framework for array reference analysis and its use in optimization. In Proc. ACM SIGPLAN'93 Conference on Programming Language Design and Implementation, pages 68–77, June 1993.
  7. P. Feautrier. Parametric integer programming. RAIRO Recherche Opérationnelle, 22:243–268, Sept. 1988.
  8. J. T. Feo, D. C. Cann, and R. R. Oldehoeft. A report on the SISAL language project. Journal of Parallel and Distributed Computing, 10:349–366, 1990.
  9. R. Gupta. A fresh look at optimizing array bound checking. In Proc. ACM SIGPLAN'90 Conference on Programming Language Design and Implementation, pages 272–282, June 1990.
  10. P. Hammarlund and B. Lisper. On the relation between functional and data parallel programming languages. In Proc. Sixth Conference on Functional Programming Languages and Computer Architecture, pages 210–222. ACM Press, June 1993.
  11. W. D. Hillis and G. L. Steele, Jr. Data parallel algorithms. Comm. ACM, 29(12):1170–1183, Dec. 1986.
  12. J. Hughes. Compile-time analysis of functional programs. In D. A. Turner, editor, Research Topics in Functional Programming, The UT Year of Programming Series, chapter 5, pages 117–153. Addison-Wesley, Reading, MA, 1989.
  13. B. Lisper. Linear programming methods for minimizing execution time of indexed computations. In Proc. Int. Workshop on Compilers for Parallel Computers, pages 131–142, Dec. 1990.
  14. B. Lisper and J.-F. Collard. Extent analysis of data fields. Research report, LIP, ENS Lyon, France, 1994. ftp: lip.ens-lyon.fr.
  15. A. Mycroft. Abstract interpretation and optimizing transformations for applicative programs. PhD thesis, Computer Science Dept., Univ. of Edinburgh, 1981.
  16. D. B. Skillicorn. Architecture-independent parallel computation. IEEE Computer, pages 38–50, December 1990.
  17. G. L. Steele, Jr. and W. D. Hillis. Connection Machine Lisp: Fine-Grained Parallel Symbolic Processing. Technical Report PL86-2, Thinking Machines Corporation, 245 First Street, Cambridge, Massachusetts 02142, May 1986.
  18. P. Wadler and R. J. M. Hughes. Projections for strictness analysis. In G. Kahn, editor, Proc. Functional Programming Languages and Computer Architecture, pages 385–407, Berlin, Sept. 1987. Volume 274 of Lecture Notes in Computer Science, Springer-Verlag.
  19. J. A. Yang and Y. Choo. Data fields as parallel programs. In Proceedings of the Second International Workshop on Array Structures, Montreal, Canada, June/July 1992.

Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Björn Lisper (1)
  • Jean-François Collard (2)
  1. Department of Teleinformatics, Royal Institute of Technology, Kista, Sweden
  2. LIP, ENS Lyon, Lyon Cedex 07, France
