CAV 2015: Computer Aided Verification pp 462-469

# Norn: An SMT Solver for String Constraints

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9206)

## Abstract

We present version 1.0 of the Norn SMT solver for string constraints. Norn is a solver for an expressive constraint language, including word equations, length constraints, and regular membership queries. As a feature distinguishing it from other SMT solvers, Norn implements a decision procedure under the assumption of a set of acyclicity conditions on word equations, without any restrictions on the use of regular membership.

## Keywords

Inference Rule · Regular Expression · Satisfying Assignment · Length Constraint · String Variable

## 1 Introduction

We introduce version 1.0 of the Norn SMT solver. Norn targets an expressive constraint language that includes word equations, length constraints, and regular membership queries. Norn is based on the calculus introduced in [1]. This version adopts several improvements over the original version, which allow it to efficiently establish or refute the satisfiability of benchmarks that are out of the reach of existing state-of-the-art solvers.

Norn aims to establish satisfiability of constraints written as Boolean combinations of: (i) word equations such as equalities $$(a\cdot u = v\cdot b)$$ or disequalities $$(a\cdot u\ne v\cdot b)$$, where a, b are letters and u, v are string variables denoting words of arbitrary lengths, (ii) length constraints such as $$(|u|=|v|+1)$$, where |u| refers to the length of the word denoted by string variable u, and (iii) predicates representing membership in regular expressions, e.g., $$u\in c\cdot (a+b)^*$$. The analysis is not trivial, as it needs to capture subtle interactions between the different types of predicates. The general decidability problem is still open. We guarantee termination of our procedure in case the considered initial constraints are acyclic. Acyclicity is a syntactic condition which ensures that no variable appears more than once in word (dis)equalities during the analysis. This defines a fragment that is rich enough to capture all the practical examples we have encountered.
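The interaction between word equations and length constraints can be seen already on the example constraints above: $$a\cdot u = v\cdot b$$ forces $$|u| = |v|$$, which contradicts $$|u| = |v|+1$$. The following brute-force sketch (not Norn's algorithm, just an illustration over a small bounded search space) confirms this:

```python
from itertools import product

def words(alphabet, max_len):
    """Enumerate all words over the alphabet up to length max_len."""
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            yield "".join(tup)

def solve(max_len=4):
    """Search for u, v satisfying a·u = v·b together with |u| = |v| + 1."""
    for u in words("ab", max_len):
        for v in words("ab", max_len):
            if "a" + u == v + "b" and len(u) == len(v) + 1:
                return (u, v)
    return None

# a·u = v·b implies 1 + |u| = |v| + 1, i.e. |u| = |v|, contradicting
# |u| = |v| + 1, so no model exists (within the bound, and in general).
assert solve() is None
# Without the length constraint, u = "b", v = "a" is a model of a·u = v·b.
assert "a" + "b" == "a" + "b"
```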

This version of the Norn solver follows a DPLL(T) architecture in order to turn the calculus introduced in [1] into an effective proof procedure, and introduces optimizations that are key to its current efficiency: an improved approach to handling disequalities, and a better strategy for splitting equalities compared to [1]. Norn accepts SMT-LIB scripts as input, both in the format proposed in [2] and in the CVC4 dialect [6], and can handle the combination of string constraints and linear integer arithmetic. In addition, Norn contains a fixed-point engine for processing recursive programs in the form of Horn constraints, which are expressed as SMT-LIB scripts with uninterpreted predicates; the algorithm for solving such Horn constraints was introduced in [1, 9].

Related work. Over the last years, several SMT solvers for strings and related logics have been introduced. A number of tools handled strings by means of a translation to bit-vectors [5, 10, 11], thus assuming a fixed upper bound on the length of the possible words. More recently, DPLL(T)-based string solvers have lifted the restriction to strings of bounded length; this generation of solvers includes Z3-str [14], CVC4 [6], and S3 [12], which are all compared to Norn in Sect. 4. Most of those solvers are more restrictive than Norn in their support for language constraints. In our experience, such restrictions are particularly problematic for software model checking, where regular membership constraints offer an elegant and powerful way of expressing and synthesising program invariants. Another related technique is the use of automata-based solvers for analyzing string-manipulating programs [13]. According to [4], automata-based solvers are faster than SMT-based ones at checking single execution traces. On the other hand, Norn's ability to derive loop invariants and to verify entire programs can allow it to conclude even in the presence of an infinite number of possible executions. Automata-based solvers would need to provide widening operators to handle such cases.

## 2 Logic and Calculus

Our constraint language includes word equations, membership queries in regular languages and length and arithmetic inequalities. We assume a finite alphabet $$\Sigma$$ and write $$\Sigma ^*$$ to mean the set of finite words over $$\Sigma$$. We assume w.l.o.g. that each letter in our alphabet is represented by its unique Unicode character. We work with a set $$U$$ of string variables denoting words in $$\Sigma ^*$$ and write $$\mathbb {Z}$$ for the set of integer numbers.

Assume variables $$u, v \in U$$, integers $$k \in \mathbb {Z}$$, letters $$c \in \Sigma$$, and words $$w \in \Sigma ^*$$. We further write $$|t|$$ for the length of a word $$t$$. The syntax of the constraints is then given by:
A constraint is linear if no variable appears more than once in any of its (dis)equalities. We write $$w_t$$ for the word denoted by a term $$t$$. The semantics of the constraints is given in [1].
Given a constraint $$\upphi$$ in our logic, we build a proof tree rooted at $$\upphi$$ by repeatedly applying inference rules. We assume here, without loss of generality, that $$\upphi$$ is given in Disjunctive Normal Form. An inference rule is of the form:

$$\textsc{Name}\;\; \frac{B_1 \quad B_2 \quad \cdots \quad B_n}{A} \;\; \textit{cond}$$
$$\textsc {Name}$$ is the name of the rule, cond is a side condition on A for the application of the rule, $$B_1~B_2~\ldots ~B_n$$ are the premises, and A is the conclusion. Premises and conclusions are constraints. Each rule application consumes a conclusion and produces premises. In our calculus, if one of the produced premises turns out to be satisfiable, then $$\upphi$$ is also satisfiable; if none of the produced premises is satisfiable, then $$\upphi$$ is unsatisfiable. The inference rules are introduced in [1]. The repeated application of the rules starting from a constraint $$\upphi$$ is guaranteed to terminate (i.e., yielding a decision procedure) in case $$\upphi$$ is acyclic. Intuitively, acyclicity is a syntactic condition on the occurrences of variables; it ensures that all (dis)equalities are linear, both in $$\upphi$$ and after the application of any inference rule. We describe one rule here; the other rules are introduced in [1].
Rule $$\textsc {Eq-Var}$$ eliminates variable u from the equality $$u\cdot t_1=t_2\wedge \upphi$$. The equality is satisfied if the word $$w_u$$ coincides with a prefix of the word $$w_{t_2}$$. We assume $$u\cdot t_1=t_2\wedge \upphi$$ is linear (see [1] for the general case). There are two sets of premises. The first set corresponds to all the cases where $$w_u$$ coincides with a word $$w_{t_3}$$ where $$t_2$$ is the concatenation $$t_3\cdot t_4$$. The second set represents all situations where $$w_{t_3}$$ is a prefix of $$w_u$$, which is in turn a prefix of $$w_{t_3\cdot v}$$, with $$t_2$$ being written as the concatenation $$t_3\cdot v\cdot t_4$$.
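The two sets of premises can be pictured as enumerating the positions at which $$w_u$$ may end inside $$t_2$$. A minimal sketch, assuming a simple list representation of terms (symbols tagged `'letter'` or `'var'`, which is our own encoding, not Norn's):

```python
def eq_var_cases(t2):
    """Enumerate the case split of Eq-Var for an equality u·t1 = t2 (sketch).

    t2 is a list of symbols ('letter', c) or ('var', x). Yields:
      ('boundary', t3)   -- w_u coincides with the word of the prefix t3;
      ('inside', t3, x)  -- w_u extends t3 into a proper part of variable x,
                            i.e. w_{t3} is a prefix of w_u, itself a prefix
                            of w_{t3·x}.
    """
    for i in range(len(t2) + 1):
        yield ('boundary', t2[:i])
        if i < len(t2) and t2[i][0] == 'var':
            yield ('inside', t2[:i], t2[i][1])

cases = list(eq_var_cases([('letter', 'a'), ('var', 'v'), ('letter', 'b')]))
# 4 boundary splits of a·v·b, plus 1 split ending inside variable v
assert len(cases) == 5
assert ('inside', [('letter', 'a')], 'v') in cases
```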

## 3 A DPLL(T)-Style Proof Procedure for Strings

We follow the classical DPLL(T) architecture [8] to turn the calculus from Sect. 2 into an effective proof procedure. For a given (quantifier-free) formula in our logic, first a Boolean skeleton is computed, abstracting every atom to a Boolean variable. A SAT-solver is then used to check satisfiability of the Boolean skeleton, producing (in the positive case) an implicant of the skeleton; the implicant is subsequently translated back to a conjunction of string literals, and checked for satisfiability in the string logic.

Our theory solver for checking conjunctions of string literals implements the rules of Sect. 2 and Sect. 3.1, and handles all necessary splitting internally, i.e., without involving the SAT-solver. In our experience (which is consistent with observations in other domains, e.g., [3]), this approach makes it easier to integrate splitting heuristics, and often shows better performance in practice. In particular, our approach to splitting equalities is model-based and exploits information extracted from arithmetic constraints in order to prune the search space; the method is explained in Sect. 3.2.

Starting from a conjunction $$\upphi = (\upphi _= \wedge \upphi _{\not =} \wedge \upphi _\in \wedge \upphi _a)$$ of literals (which is here split into equalities $$\upphi _=$$, disequalities $$\upphi _{\not =}$$, membership constraints $$\upphi _\in$$, and arithmetic constraints $$\upphi _a$$) the theory solver performs depth-first exploration until either a proof branch is found that cannot be closed (and constitutes a model), or all branches have been closed and discharged. In the latter case, information about the string literals involved in showing unsatisfiability is propagated back to the SAT-solver as a blocking clause.

Rules are applied to $$\upphi = (\upphi _= \wedge \upphi _{\not =} \wedge \upphi _\in \wedge \upphi _a)$$ in the following order: (i) Satisfiability of $$\upphi _a$$ (in Presburger arithmetic) is checked, (ii) Compound disequalities in $$\upphi _{\not =}$$ are eliminated (Sect. 3.1), (iii) Equalities in $$\upphi _=$$ with complex left-hand side are split (Sect. 3.2), (iv) Membership constraints in $$\upphi _\in$$ with complex term are split, and (v) Satisfiability of all remaining membership literals and arithmetic constraints is checked.

### 3.1 Efficient Handling of Disequalities

To handle disequalities, we proceed differently from the method presented in [1]. For each disequality of the form $$t\ne t'$$, the rule Diseq-Split produces only two premises. The first premise corresponds to the case where the words $$w_{t}$$ and $$w_{t'}$$ have different lengths. The second covers the case where $$w_{t}$$ and $$w_{t'}$$ have the same length but contain different letters $$c \ne c'$$ after a common prefix. Rather than constructing a premise for each pair of different letters (as is done in [1]), we introduce two special variables $$\mu$$ and $$\mu '$$ (called witness variables) such that the letters c and $$c'$$ correspond to the words denoted by $$\mu$$ and $$\mu '$$. Therefore, the length of these witness variables is one, and this fact is added to the arithmetic constraints. Furthermore, we add a disequality $$\mu \ne \mu '$$ in order to denote that c is different from $$c'$$. Assuming fresh variables u, v and $$v'$$, we rewrite $$t\ne t'$$ as two equalities $$t=u\cdot \mu \cdot v$$ and $$t'=u \cdot \mu '\cdot v'$$. Finally, w.l.o.g. we restrict the inference rules such that witness variables can only be substituted by other witness variables.
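The rewriting of a disequality into its two premises can be sketched as a plain syntactic transformation; the textual constraint syntax and the variable-naming scheme below are illustrative, not Norn's actual representation:

```python
import itertools

_fresh = itertools.count()

def fresh(prefix):
    """Generate a fresh variable name (illustrative naming scheme)."""
    return f"{prefix}{next(_fresh)}"

def split_diseq(t, tp):
    """Rewrite t != t' into the two premises of Diseq-Split (sketch):
    (1) the words have different lengths, or
    (2) they share a prefix u and then differ at one letter, modelled by
        two length-one witness variables mu, mu' with mu != mu'."""
    u, v, vp = fresh("u"), fresh("v"), fresh("v")
    mu, mup = fresh("mu"), fresh("mu")
    different_length = [f"|{t}| != |{tp}|"]
    different_letter = [
        f"{t} = {u}.{mu}.{v}",      # common prefix u, witness mu
        f"{tp} = {u}.{mup}.{vp}",   # same prefix u, witness mu'
        f"|{mu}| = 1",              # witnesses denote single letters,
        f"|{mup}| = 1",             # recorded as arithmetic constraints
        f"{mu} != {mup}",           # and the letters differ
    ]
    return different_length, different_letter

len_case, letter_case = split_diseq("x.y", "z")
assert len(len_case) == 1 and len(letter_case) == 5
```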
The new Rule Reg-Witness can only be applied to a witness variable $$\mu$$ in a certain case. For a formula $$\upphi$$, we define the condition $$\Theta (\upphi ,\mu )$$ to denote that $$\mu$$ appears in $$\upphi$$ only in disequalities. The Rule Reg-Witness replaces all membership predicates $$\{\mu \in R_i\}_{i=1}^n$$ with an arithmetic constraint $$\mathsf{Unicode}(R_1,R_2,\ldots ,R_n,\mu )$$. This constraint uses a fresh variable $$\mu _{uni}$$ s.t. the set of possible lengths of the word denoted by $$\mu _{uni}$$ represents the set of Unicode characters belonging to the intersection of all regular expressions $$\{R_i\}_{i=1}^n$$. In order to do so, we construct a finite state automaton A accepting the intersection of $$\{R_i\}_{i=1}^n$$. Furthermore, we restrict A to accept only words of length exactly one (since $$\mu$$ is a witness variable). The obtained automaton is then determinized. Notice that the determinized automaton B has only transitions from the initial state to the final one. Each transition of B is labelled by a Unicode character interval, as specified by the automata library [7] we are using. Then, for each transition labelled by an interval of the form {min, ...,max}, we associate the arithmetic constraint $$min\le |\mu _{uni}|\le max$$. Finally, our arithmetic constraint $$\mathsf{Unicode}(R_1,R_2,\ldots ,R_n,\mu )$$ is the disjunction of the arithmetic constraints associated with all the transitions of B. In the case that the intersection is empty, we set $$\mathsf{Unicode}(R_1,R_2,\ldots ,R_n,\mu )$$ to $$\mathtt{false}$$.
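Since each $$R_i$$ restricted to length-one words is just a set of character-code intervals, the Unicode constraint amounts to intersecting intervals and emitting one bound per surviving interval. A minimal sketch, assuming each regular expression is already given as its interval set (the automaton construction via the library [7] is elided):

```python
def unicode_constraint(regex_intervals, mu="mu_uni"):
    """Sketch of Unicode(R1,...,Rn, mu): each Ri is represented by the
    character-code intervals of its length-one words; the intersection of
    the Ri becomes a disjunction of bounds on |mu_uni|, or 'false' if the
    intersection is empty."""
    current = [(0, 0x10FFFF)]            # start from the full Unicode range
    for intervals in regex_intervals:    # intersect language by language
        nxt = []
        for lo1, hi1 in current:
            for lo2, hi2 in intervals:
                lo, hi = max(lo1, lo2), min(hi1, hi2)
                if lo <= hi:
                    nxt.append((lo, hi))
        current = nxt
    if not current:
        return "false"
    return " or ".join(f"{lo} <= |{mu}| <= {hi}" for lo, hi in current)

# [a-c] intersected with [b-z]: only the codes of 'b'..'c' survive
assert unicode_constraint([[(97, 99)], [(98, 122)]]) == "98 <= |mu_uni| <= 99"
# disjoint ranges yield the unsatisfiable constraint
assert unicode_constraint([[(97, 99)], [(200, 300)]]) == "false"
```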
Finally, the Rule Diseq-Witness replaces a disequality of the form $$\mu \ne \mu '$$ by the arithmetic constraint $$|\mu _{uni}| \ne |\mu '_{uni}|$$.

### 3.2 Length-Guided Splitting of Equalities

The original calculus rule for handling complex equalities is Eq-Var, which systematically enumerates the different ways of matching up left-hand and right-hand side terms. For a practical proof procedure, naive use of this rule is sub-optimal in two respects: the number of cases to be considered grows quickly (in the worst case, exponentially in the number of equalities); and the rule does not provide any guidance on the order in which the cases should be considered, which can have dramatic impact on the performance for satisfiable problems. We found that both aspects can be improved by eagerly taking arithmetic constraints on the length of strings into account.

To present the approach, we assume that conjunctions $$\upphi = (\upphi _= \wedge \upphi _{\not =} \wedge \upphi _\in \wedge \upphi _a)$$ are continuously saturated by propagating length information from $$\upphi _=$$ to $$\upphi _a$$: for every equality $$s = t$$, a corresponding length equality $$|s| = |t|$$ is added, compound expressions $$|s \cdot t|$$ are rewritten to $$|s| + |t|$$, and the length |w| of concrete words $$w \in \Sigma ^*$$ is evaluated. In addition, for every variable v an inequality $$|v| \ge 0$$ is generated. Similar propagation is possible for membership constraints in $$\upphi _\in$$.
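The propagation step is a straightforward recursion over terms; the following sketch uses a simple tuple encoding of terms ('word', 'var', 'concat') that is our own, not Norn's:

```python
def length_of(term):
    """Translate a string term into a length expression (sketch).
    Terms: ('word', w) concrete word, ('var', v), ('concat', s, t)."""
    kind = term[0]
    if kind == 'word':
        return str(len(term[1]))            # |w| is evaluated
    if kind == 'var':
        return f"|{term[1]}|"
    s, t = term[1], term[2]
    return f"{length_of(s)} + {length_of(t)}"   # |s·t| -> |s| + |t|

def propagate(equalities):
    """For every equality s = t, emit the length equality |s| = |t|."""
    return [f"{length_of(s)} = {length_of(t)}" for s, t in equalities]

# u·"ab" = v  propagates to  |u| + 2 = |v|
eq = (('concat', ('var', 'u'), ('word', 'ab')), ('var', 'v'))
assert propagate([eq]) == ["|u| + 2 = |v|"]
```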

Prior to splitting equalities from $$\upphi _=$$, it is then possible to check the satisfiability of arithmetic constraints $$\upphi _a$$ (using any solver for Presburger arithmetic), and compute a satisfying assignment $$\upbeta$$. This assignment defines the length $$val_{\upbeta }(|v|)$$ of all string variables v, and thus uniquely determines how the right-hand side of an equality $$u \cdot t_1 = t_2$$ should be split into a prefix corresponding to u, and a suffix corresponding to $$t_1$$. We obtain the following modified splitting rule, which has the side condition that $$u \cdot t_1 = t_2 \cdot v \cdot t_3$$ is linear, and that a satisfying assignment $$\upbeta$$ of $$\upphi _a$$ exists such that $$val_{\upbeta }(|t_2|) \le val_{\upbeta }(|u|) \le val_{\upbeta }(|t_2\cdot v|)$$:
A similar rule is introduced to cover the situation that the right-hand side has to be split between two concrete letters, i.e., in case we have $$val_{\upbeta }(|u|) = val_{\upbeta }(|t_2|)$$ and $$val_{\upbeta }(|t_1|) = val_{\upbeta }(|t_3|)$$ for an equation $$u \cdot t_1 = t_2 \cdot t_3$$.
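Using the lengths prescribed by $$\upbeta$$, the split position on the right-hand side is determined rather than enumerated. A sketch of this selection step (the symbol encoding and return convention are illustrative assumptions):

```python
def guided_split(u, rhs, beta):
    """Use a satisfying length assignment beta (variable -> length; letters
    count as 1) to decide where the right-hand side of u·t1 = t2 is split,
    as in the length-guided rule. rhs is a list of symbols ('letter', c) or
    ('var', x). Returns (prefix, None) when the border falls at a symbol
    boundary, or (prefix, x) when it falls strictly inside variable x."""
    def size(sym):
        return 1 if sym[0] == 'letter' else beta[sym[1]]
    target, taken, prefix = beta[u], 0, []
    for sym in rhs:
        if taken == target:
            return prefix, None
        if sym[0] == 'var' and taken + size(sym) > target:
            return prefix, sym[1]    # border falls inside this variable
        prefix.append(sym)
        taken += size(sym)
    return prefix, None

rhs = [('letter', 'a'), ('var', 'v'), ('letter', 'b')]
# |u| = 2, |v| = 3: u ends strictly inside v, after the prefix "a"
assert guided_split('u', rhs, {'u': 2, 'v': 3}) == ([('letter', 'a')], 'v')
# |u| = 4, |v| = 3: u covers exactly a·v
assert guided_split('u', rhs, {'u': 4, 'v': 3}) == \
    ([('letter', 'a'), ('var', 'v')], None)
```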

## 4 Implementation and Experiments

We compare the new version of Norn¹ to other solvers on two sets of benchmarks. First, we use the well-known set of Kaluza benchmarks, which were translated to SMT-LIB by the authors of CVC4 [6]. These benchmarks contain constraints generated by a JavaScript analysis tool, and are mainly equational, with relatively little use of regular expressions. Results are given in Table 1, and show that currently Z3-str [14] performs best for this kind of benchmarks; however, Norn can solve 27 benchmarks that no other tool can handle (Table 2). S3 [12] produced internal errors on a large number of the Kaluza benchmarks, and sometimes results that contradicted the other solvers: for 95 problems, S3 claimed unsat, whereas Z3-str and CVC4 reported sat. For 27 of those, Norn also gave the answer sat. No contradictions were observed between CVC4, Z3-str, and Norn. A direct comparison with Norn 0.3 [1] was not possible due to lacking support for SMT-LIB input. Instead, we internally modified Norn and reverted to the old version of the calculus. The results indicate that our new rules significantly improve the performance, especially on large benchmarks.

As a second set of benchmarks, we considered queries generated during CEGAR-based verification of string-processing programs [1]; those queries are quite small, but make heavy use of regular expressions and operators like the Kleene star. Norn could solve all of the benchmarks. We did not observe any major difference between the two versions of the calculus (runtimes are typically very small). Comparison with Z3-str was not possible, since the solver does not support regular expressions.² CVC4 and S3 showed timeouts, ran out of memory, or crashed on a large number of the benchmarks. S3 and Norn gave contradicting answers in altogether 413 cases, with manual inspection indicating that the answers given by Norn were correct. This was confirmed by the S3 authors, and will be fixed in the near future; a corrected version was not available by the deadline.
Table 1.

Experimental results. All experiments were done on an AMD Opteron 2220 SE machine, running 64-bit Linux and Java 1.8. Runtime was limited to 240 s (wall clock time), and heap space to 1.5 GB. CEGAR benchmarks were downsized from UTF16 when necessary.

|                       | Norn 1.0 | Norn 0.3 | CVC4 1.5pre | Z3-str 1.0.0 | S3     |
|-----------------------|----------|----------|-------------|--------------|--------|
| **Kaluza** (Sat)      | 33 072   | 31 018   | 33 772      | 34 770       | 30 925 |
| (Unsat)               | 11 595   | 11 256   | 11 625      | 11 799       | 11 408 |
| (Unknown)             | 2 617    | 5 010    | 1 887       | 715          | 3 081  |
| (Crash)               | 0        | 0        | 0           | 0            | 1 870  |
| **CEGAR** (Sat)       | 712      | 712      | 292         | –            | 307    |
| (Unsat)               | 315      | 315      | 98          | –            | 530    |
| (Unknown)             | 0        | 0        | 637         | –            | 158    |
| (Crash/OOM)           | 0        | 0        | 0           | –            | 32     |

Table 2.

Complementarity of the results: number of problems for which one tool can show sat/unsat, whereas another tool times out or crashes. For instance, Norn can prove satisfiability of 435 Kaluza benchmarks on which CVC4 times out.

| Failing tool      | Norn 1.0 Sat | Norn 1.0 Unsat | CVC4 Sat | CVC4 Unsat | Z3-str Sat | Z3-str Unsat | S3 Sat | S3 Unsat |
|-------------------|--------------|----------------|----------|------------|------------|--------------|--------|----------|
| Norn (Kaluza)     | –            | –              | +1 135   | +57        | +1 698     | +231         | +64    | +125     |
| Norn (CEGAR)      | –            | –              | 0        | 0          | –          | –            | 0      | 0        |
| CVC4 (Kaluza)     | +435         | +27            | –        | –          | +998       | +174         | 0      | 0        |
| CVC4 (CEGAR)      | +420         | +217           | –        | –          | –          | –            | +124   | +398     |
| Z3-str (Kaluza)   | 0            | +27            | 0        | 0          | –          | –            | 0      | 0        |
| Z3-str (CEGAR)    | –            | –              | –        | –          | –          | –            | –      | –        |
| S3 (Kaluza)       | +2 184       | +339           | +2 752   | +312       | +3 750     | +486         | –      | –        |
| S3 (CEGAR)        | +134         | +56            | +57      | +18        | –          | –            | –      | –        |

## Footnotes

1. Tool and benchmarks are available on http://user.it.uu.se/%7Ejarst116/norn/.

2. The recently released Z3-str2 supports regular expressions, but in a format different from all other compared tools, so that experiments could not be carried out by the deadline.

## References

1. Abdulla, P.A., Atig, M.F., Chen, Y.-F., Holík, L., Rezine, A., Rümmer, P., Stenman, J.: String constraints for verification. In: Biere, A., Bloem, R. (eds.) CAV 2014. LNCS, vol. 8559, pp. 150–166. Springer, Heidelberg (2014)
2. Bjørner, N., Ganesh, V., Michel, R., Veanes, M.: SMT-LIB sequences and regular expressions. In: Fontaine, P., Goel, A. (eds.) SMT 2012. EPiC Series, vol. 20, pp. 77–87. EasyChair (2013)
3. Griggio, A.: A practical approach to satisfiability modulo linear integer arithmetic. JSAT 8(1/2), 1–27 (2012)
4. Kausler, S., Sherman, E.: Evaluation of string constraint solvers in the context of symbolic execution. In: Crnkovic, I., Chechik, M., Grünbacher, P. (eds.) ASE 2014, pp. 259–270. ACM (2014). http://doi.acm.org/10.1145/2642937.2643003
5. Kiezun, A., Ganesh, V., Guo, P.J., Hooimeijer, P., Ernst, M.D.: HAMPI: a solver for string constraints. In: ISSTA, pp. 105–116. ACM (2009)
6. Liang, T., Reynolds, A., Tinelli, C., Barrett, C., Deters, M.: A DPLL(T) theory solver for a theory of strings and regular expressions. In: Biere, A., Bloem, R. (eds.) CAV 2014. LNCS, vol. 8559, pp. 646–662. Springer, Heidelberg (2014)
7. Møller, A.: dk.brics.automaton - finite-state automata and regular expressions for Java (2010). http://www.brics.dk/automaton/
8. Nieuwenhuis, R., Oliveras, A., Tinelli, C.: Solving SAT and SAT modulo theories: from an abstract Davis-Putnam-Logemann-Loveland procedure to DPLL(T). J. ACM 53(6), 937–977 (2006)
9. Rümmer, P., Hojjat, H., Kuncak, V.: Disjunctive interpolants for Horn-clause verification. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 347–363. Springer, Heidelberg (2013)
10. Saxena, P., Akhawe, D., Hanna, S., Mao, F., McCamant, S., Song, D.: A symbolic execution framework for JavaScript. In: IEEE Symposium on Security and Privacy, pp. 513–528. IEEE Computer Society (2010)
11. Saxena, P., Hanna, S., Poosankam, P., Song, D.: FLAX: systematic discovery of client-side validation vulnerabilities in rich web applications. In: NDSS. The Internet Society (2010)
12. Trinh, M.T., Chu, D.H., Jaffar, J.: S3: a symbolic string solver for vulnerability detection in web applications. In: Ahn, G., Yung, M., Li, N. (eds.) CCS, pp. 1232–1243. ACM (2014)
13. Yu, F., Alkhalaf, M., Bultan, T.: Stranger: an automata-based string analysis tool for PHP. In: Esparza, J., Majumdar, R. (eds.) TACAS 2010. LNCS, vol. 6015, pp. 154–157. Springer, Heidelberg (2010)
14. Zheng, Y., Zhang, X., Ganesh, V.: Z3-str: a Z3-based string solver for web application analysis. In: Meyer, B., Baresi, L., Mezini, M. (eds.) ESEC/FSE, pp. 114–124. ACM (2013)

© Springer International Publishing Switzerland 2015

## Authors and Affiliations

• Parosh Aziz Abdulla (1)
• Mohamed Faouzi Atig (1)
• Yu-Fang Chen (2)
• Lukáš Holík (3)
• Ahmed Rezine (4)
• Philipp Rümmer (1)
• Jari Stenman (1)

1. Department of Information Technology, Uppsala University, Uppsala, Sweden
2. Institute of Information Science, Academia Sinica, Taipei, Taiwan
3. Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic