Language-Theoretic Abstraction Refinement

  • Zhenyue Long
  • Georgel Calin
  • Rupak Majumdar
  • Roland Meyer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7212)

Abstract

We give a language-theoretic counterexample-guided abstraction refinement (CEGAR) algorithm for the safety verification of recursive multi-threaded programs. First, we reduce safety verification to the (undecidable) language emptiness problem for the intersection of context-free languages. Initially, our CEGAR procedure overapproximates the intersection by a context-free language. If the overapproximation is empty, we declare the system safe. Otherwise, we compute a bounded language from the overapproximation and check emptiness for the intersection of the context-free languages and the bounded language (which is decidable). If the intersection is non-empty, we report a bug. If it is empty, we refine the overapproximation by removing the bounded language and try again. The key idea of the CEGAR loop is the language-theoretic view: different strategies to obtain regular overapproximations and bounded approximations of the intersection give different implementations. We give concrete algorithms to approximate context-free languages using regular languages and to generate bounded languages representing a family of counterexamples. We have implemented our algorithms and provide an experimental comparison of various choices for the regular overapproximation and the bounded underapproximation.
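The overall loop structure described in the abstract can be sketched abstractly. In the toy sketch below (not the paper's implementation), languages are finite Python sets standing in for context-free languages, `overapprox` stands in for regular overapproximation of the intersection, and `pick_bounded` stands in for the choice of a bounded language; all three names are hypothetical placeholders for illustration only.

```python
def cegar_loop(languages, overapprox, pick_bounded):
    """Toy CEGAR loop sketch: languages are finite sets standing in
    for context-free languages, so every check here is decidable."""
    over = overapprox(languages)          # overapproximate the intersection
    while over:                           # empty overapproximation => safe
        bounded = pick_bounded(over)      # a "bounded language" candidate
        # Decidable check: intersect the languages with the bounded language.
        witness = set.intersection(*languages) & bounded
        if witness:
            return ("BUG", witness)       # a real counterexample was found
        over -= bounded                   # refine: remove the refuted family
    return ("SAFE", None)
```

A usage sketch, taking the first language as the overapproximation (it contains the intersection) and singletons as bounded languages:

```python
L1 = {"ab", "aabb"}
L2 = {"ab", "ba"}
result = cegar_loop([L1, L2],
                    lambda ls: set(ls[0]),
                    lambda over: {sorted(over)[0]})
# result == ("BUG", {"ab"}): the word "ab" lies in both languages
```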

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Zhenyue Long (1, 2, 3)
  • Georgel Calin (4)
  • Rupak Majumdar (1)
  • Roland Meyer (4)

  1. Max Planck Institute for Software Systems, Germany
  2. State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China
  3. Graduate University, Chinese Academy of Sciences, China
  4. Department of Computer Science, University of Kaiserslautern, Germany