Can abstract interpretation become a mainstream compiler technology?
Abstract interpretation has enormous promise and yet remains at the margins of compiler practice. In this talk I will argue that abstract interpretation cannot become a mainstream compiler technology until its computational, algorithmic aspects are as well developed as its mathematical, foundational aspects have been to date. I will put the problem into perspective by comparing abstract interpretation to dataflow analysis, for which a well-developed body of computational methods exists. This comparison reveals that abstract interpretation is most appropriately seen as a method for specifying problems (that is, equations to be solved), not as a method for specifying solutions (that is, algorithms for solving equations). In particular, efficient solution methods for the equations that arise from abstract interpretations are seldom obvious from the surface of the equations themselves. In other words, the “algorithm” strongly suggested by an abstract interpretation, in which the semantic domains are viewed as “data structures”, the semantic functions as “procedures”, and a simple fixed-point engine is used to integrate these parts into a workable whole, is naive and at best suitable for prototyping program analyzers. This point of view calls into question the casual dismissal of abstract interpretation as inefficient, by questioning what can be inferred about the complexity of an abstract interpretation problem from a superficial examination of the domains and semantic functions involved.
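To make the “naive” reading concrete, here is a minimal sketch of that style: an abstract domain as a data structure, abstract semantic functions as procedures, and a simple Kleene fixed-point engine tying them together. The sign lattice and the analyzed loop are hypothetical illustrations, not from the talk.

```python
# Naive abstract-interpretation style: domain = data structure,
# semantic functions = procedures, plus a simple fixed-point engine.
# Illustrative sign domain: bot <= neg, zero, pos <= top.
BOT, NEG, ZERO, POS, TOP = "bot", "neg", "zero", "pos", "top"

def join(a, b):
    """Least upper bound in the sign lattice."""
    if a == BOT:
        return b
    if b == BOT:
        return a
    return a if a == b else TOP

def abs_add(a, b):
    """Abstract addition on signs."""
    if BOT in (a, b):
        return BOT
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    return a if a == b else TOP  # e.g. pos + neg could be anything

def fixpoint(transfer, init):
    """Kleene iteration: apply the transfer function until it stabilizes."""
    state = init
    while True:
        new = transfer(state)
        if new == state:
            return state
        state = new

# Hypothetical loop: x enters positive, body computes x = x + x,
# joined with the entry value at the loop head.
result = fixpoint(lambda x: join(POS, abs_add(x, x)), BOT)
print(result)  # pos: positive plus positive stays positive
```

This is exactly the solver whose efficiency the abstract's argument questions: correct, easy to derive from the specification, and adequate for prototyping, but not a basis for judging what more carefully engineered algorithms could achieve on the same equations.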