# Transitive closure of infinite graphs and its applications

## Abstract

Integer tuple relations can concisely summarize many types of information gathered from analysis of scientific codes. For example, they can be used to precisely describe which iterations of a statement are data dependent on which other iterations. It is generally not possible to represent these tuple relations by enumerating the related pairs of tuples. For example, it is impossible to enumerate the related pairs of tuples in the relation {[i] → [i+2] | 1 ≤ i ≤ n − 2}. Even when it is possible to enumerate the related pairs of tuples, such as for the relation {[i, j] → [i′, j′] | 1 ≤ i, j, i′, j′ ≤ 100}, it is often not practical to do so. We instead use a closed-form description, specifying a predicate consisting of affine constraints on the related pairs of tuples. As we just saw, these affine constraints can be parameterized, so what we are really describing are infinite families of relations (or graphs). Many of our applications of tuple relations rely heavily on an operation called transitive closure. Computing the transitive closure of these “infinite graphs” is very different from the traditional problem of computing the transitive closure of a graph whose edges can be enumerated. For example, the transitive closure of the first relation above is the relation {[i] → [i′] | ∃β s.t. i′ − i = 2β ∧ 1 ≤ i ≤ i′ ≤ n}. As we will prove, transitive closure is not computable in the general case. We have developed algorithms that produce exact results in most commonly occurring cases and produce upper or lower bounds (as necessary) in the other cases. This paper will describe our algorithms for computing transitive closure and some of its applications, such as determining which interprocessor synchronizations are redundant.
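For a fixed value of the parameter n, the closed form given in the abstract can be checked against direct enumeration. The sketch below (not from the paper; the function names and the fixpoint iteration are illustrative assumptions) enumerates the reflexive transitive closure of {[i] → [i+2] | 1 ≤ i ≤ n − 2} and compares it with the closed form {[i] → [i′] | ∃β s.t. i′ − i = 2β ∧ 1 ≤ i ≤ i′ ≤ n}:

```python
# Sketch: compare enumeration with the closed form for the (reflexive)
# transitive closure of R = {[i] -> [i+2] | 1 <= i <= n-2}, for a fixed n.

def closure_by_enumeration(n):
    """Enumerate the closure by iterating composition with R to a fixpoint."""
    pairs = {(i, i) for i in range(1, n + 1)}      # reflexive pairs [i] -> [i]
    edges = [(i, i + 2) for i in range(1, n - 1)]  # R: 1 <= i <= n-2
    changed = True
    while changed:                                 # repeat until no new pair appears
        changed = False
        for (a, b) in list(pairs):
            for (c, d) in edges:
                if b == c and (a, d) not in pairs:
                    pairs.add((a, d))
                    changed = True
    return pairs

def closure_closed_form(n):
    """{[i] -> [i'] | exists beta: i' - i = 2*beta and 1 <= i <= i' <= n}"""
    return {(i, j)
            for i in range(1, n + 1)
            for j in range(i, n + 1)
            if (j - i) % 2 == 0}

assert closure_by_enumeration(9) == closure_closed_form(9)
```

The enumeration only works once n is fixed; the point of the paper's algorithms is to produce the parameterized closed form directly, without enumerating any pairs.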

## Keywords

Transitive Closure · Iteration Space · Nest Loop · Related Pair · Loop Body
