Abstract
Brzozowski introduced the notion of derivatives for regular expressions. They can be used for a very simple regular expression matching algorithm. Sulzmann and Lu cleverly extended this algorithm in order to deal with POSIX matching, which is the underlying disambiguation strategy for regular expressions needed in lexers. Their algorithm generates POSIX values which encode the information of how a regular expression matches a string—that is, which part of the string is matched by which part of the regular expression. In this paper we give our inductive definition of what a POSIX value is and show that Sulzmann and Lu’s algorithm always generates such a value. We also show that our inductive definition of a POSIX value is equivalent to an alternative definition by Okui and Suzuki which identifies POSIX values as least elements according to an ordering of values.
1 Introduction
Brzozowski [4] introduced the notion of the derivative \(r\backslash c\) of a regular expression \(r\) w.r.t. a character \(c\), and showed that it gave a simple solution to the problem of matching a string \(s\) with a regular expression \(r\): if the derivative of \(r\) w.r.t. (in succession) all the characters of the string matches the empty string, then \(r\) matches \(s\) (and vice versa). The derivative has the property (which may almost be regarded as its specification) that, for every string \(s\), regular expression \(r\) and character \(c\), one has \(c::s \in L(r)\) if and only if \(s \in L(r\backslash c)\). The beauty of Brzozowski’s derivatives is that they are neatly expressible in any functional programming language, and easily definable and reasoned about in theorem provers—the definitions just consist of inductive datatypes and simple recursive functions. A mechanised correctness proof of Brzozowski’s matcher in, for example, HOL4 has been mentioned by Owens and Slind [18]. Another one in Isabelle/HOL is part of the work by Krauss and Nipkow [11]. And another one in Coq is given by Coquand and Siles [5]. Also Ribeiro and Du Bois give one in Agda [20].
If a regular expression matches a string, then in general there is more than one way of how the string is matched. There are two commonly used disambiguation strategies to generate a unique answer: one is called GREEDY matching [8] and the other is POSIX matching [12, 16, 21, 23, 24].Footnote 1 For example consider the string \(xy\) and the regular expression \((x + y + xy)^\star \). Either the string can be matched in two ‘iterations’ by the single letter-regular expressions \(x\) and \(y\), or directly in one iteration by \(xy\). The first case corresponds to GREEDY matching, which first matches with the left-most symbol and only matches the next symbol in case of a mismatch (this is greedy in the sense of preferring instant gratification to delayed repletion). The second case is POSIX matching, which prefers the longest match.
In the context of lexing, where an input string needs to be split up into a sequence of tokens, POSIX is the more natural disambiguation strategy for what programmers consider basic syntactic building blocks in their programs. These building blocks are often specified by some regular expressions, say \(r_{key}\) and \(r_{id}\) for recognising keywords and identifiers, respectively. There are a few underlying (informal) rules behind tokenising a string in a POSIX [23] fashion:
- The Longest Match Rule (or “Maximal Munch Rule”): The longest initial substring matched by any regular expression is taken as the next token.
- Priority Rule: For a particular longest initial substring, the first (leftmost) regular expression that can match determines the token.
- Star Rule: A subexpression repeated by \({}^\star \) shall not match an empty string unless this is the only match for the repetition.
- Empty String Rule: An empty string shall be considered to be longer than no match at all.
Consider for example the regular expression \(r_{key}\) for recognising keywords such as if, then, and so on; and \(r_{id}\) for recognising identifiers (say, a single character followed by characters or numbers). Then we can form the regular expression \((r_{key} + r_{id})^\star \) and use POSIX matching to tokenise strings, say iffoo and if. For iffoo we obtain by the Longest Match Rule a single identifier token, not a keyword followed by an identifier. For if we obtain by the Priority Rule a keyword token, not an identifier token—even if \(r_{id}\) matches also. By the Star Rule we know \((r_{key} + r_{id})^\star \) matches iffoo, respectively if, in exactly one ‘iteration’ of the star. The Empty String Rule is for cases where, for example, the regular expression \((a^\star )^\star \) matches against the string bc. Then the longest initial matched substring is the empty string, which is matched by both the whole regular expression and the parenthesised subexpression.
One limitation of Brzozowski’s matcher is that it only generates a YES/NO answer for whether a string is being matched by a regular expression. Sulzmann and Lu [21] extended this matcher to allow generation not just of a YES/NO answer but of an actual matching, called a lexical value. Assuming a regular expression matches a string, values encode the information of how the string is matched by the regular expression—that is, which part of the string is matched by which part of the regular expression. For this consider again the string \(xy\) and the regular expression \((x + (y + xy))^\star \) (this time fully parenthesised). We can view this regular expression as a tree and if the string \(xy\) is matched by two Star ‘iterations’, then the \(x\) is matched by the left-most alternative in this tree and the \(y\) by the right-left alternative. This suggests to record this matching as \(Stars\,[Left\,(Char\,x),\ Right\,(Left\,(Char\,y))]\)
where \(Stars\), \(Left\), \(Right\) and \(Char\) are constructors for values. \(Stars\) records how many iterations were used; \(Left\), respectively \(Right\), which alternative is used. The value for matching \(xy\) in a single ‘iteration’, i.e. the POSIX value, would look as follows: \(Stars\,[Right\,(Right\,(Seq\,(Char\,x)\,(Char\,y)))]\)
where \(Stars\) has only a single-element list for the single iteration and \(Seq\) indicates that \(xy\) is matched by a sequence regular expression. This ‘tree view’ leads naturally to the idea that regular expressions act as types and values as inhabiting those types (see, for example, [10, 14]).
Sulzmann and Lu give a simple algorithm to calculate a value that appears to be the value associated with POSIX matching. The challenge then is to specify that value, in an algorithm-independent fashion, and to show that Sulzmann and Lu’s derivative-based algorithm does indeed calculate a value that is correct according to the specification. The answer given by Sulzmann and Lu [21] is to define a relation (called an “order relation”) on the set of values of \(r\), and to show that (once a string to be matched is chosen) there is a maximum element and that it is computed by their derivative-based algorithm. This proof idea is inspired by work of Frisch and Cardelli [8] on a GREEDY regular expression matching algorithm. However, we were not able to establish transitivity and totality for the “order relation” by Sulzmann and Lu. There are some inherent problems with their approach (of which some of the proofs are not published in [21]); perhaps more importantly, we give in this paper a simple inductive (and algorithm-independent) definition of what it means to be a POSIX value for a regular expression \(r\) and a string \(s\); we show that the algorithm by Sulzmann and Lu computes such a value and that such a value is unique. Our proofs are both done by hand and checked in Isabelle/HOL. The experience of doing our proofs has been that this mechanical checking was absolutely essential: this subject area has hidden snares. This was also noted by Kuklewicz [12] who found that nearly all POSIX matching implementations are “buggy” [21, p. 203] and by Grathwohl et al. [9, p. 36] who wrote:
“The POSIX strategy is more complicated than the greedy because of the dependence on information about the length of matched strings in the various subexpressions.”
Contributions: We have implemented in Isabelle/HOL the derivative-based regular expression matching algorithm of Sulzmann and Lu [21]. We have proved the correctness of this algorithm according to our specification of what a POSIX value is (inspired by work of Vansummeren [24]). Sulzmann and Lu sketch in [21] an informal correctness proof: but to us it contains unfillable gaps.Footnote 2 Our specification of a POSIX value consists of a simple inductive definition that, given a string \(s\) and a regular expression \(r\), uniquely determines this value. We also show that our definition is equivalent to an ordering of values based on positions by Okui and Suzuki [16].
2 Preliminaries
Strings in Isabelle/HOL are lists of characters with the empty string being represented by the empty list, written \([]\), and list-cons being written as \(\_::\_\). Often we use the usual bracket notation for lists also for strings; for example a string consisting of just a single character \(c\) is written \([c]\). We use the usual definitions for prefixes and strict prefixes of strings. By using the type char for characters we have a supply of finitely many characters roughly corresponding to the ASCII character set. Regular expressions are defined as usual as the elements of the following inductive datatype: \(r \;::=\; \mathbf{0} \mid \mathbf{1} \mid c \mid r_1 \cdot r_2 \mid r_1 + r_2 \mid r^\star \mid cs\)
where \(\mathbf{0}\) stands for the regular expression that does not match any string, \(\mathbf{1}\) for the regular expression that matches only the empty string and \(c\) for matching a character literal. We use \(+\) and \(\cdot \) for alternative and sequence regular expressions, respectively. We are adding here to the usual regular expressions also a regular expression for character sets, written \(cs\) above, where \(cs\) is a set of characters. Such character sets can of course be represented by using alternatives (and \(\mathbf{0}\) if the set is empty) and therefore do not add anything new in terms of recognised languages. We include them here because they will show later on that the generality of a definition is required once even such simple regular expressions are added.
The language of a regular expression is defined as usual by the recursive function \(L\) with the seven clauses:
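One standard way to state these clauses, numbered here in the order referenced in the text below, is:
\[
\begin{array}{rrcl}
(1) & L(\mathbf{0}) & \equiv & \varnothing\\
(2) & L(\mathbf{1}) & \equiv & \{[]\}\\
(3) & L(c) & \equiv & \{[c]\}\\
(4) & L(r_1 \cdot r_2) & \equiv & L(r_1) \,@\, L(r_2)\\
(5) & L(r_1 + r_2) & \equiv & L(r_1) \cup L(r_2)\\
(6) & L(r^\star ) & \equiv & (L(r))^\star\\
(7) & L(cs) & \equiv & \{[c] \mid c \in cs\}
\end{array}
\]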
In clause (4) we use the operation \(\_@\_\) for the concatenation of two languages (it is also list-append for strings). We use the star-notation for regular expressions and for languages (in the clause (6) above). The star for languages is defined inductively by two clauses: the empty string being in the star of a language, and if \(s_1\) is in a language and \(s_2\) in the star of this language, then also \(s_1 @ s_2\) is in the star of this language. It will also be convenient to use the following notion of a semantic derivative (or left quotient) of a language \(A\) w.r.t. a character \(c\), defined as \(\textit{Der}\ c\ A \;\equiv\; \{s \mid c::s \in A\}\).
For semantic derivatives we have the following equations (for example mechanically proved in [11]):
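These are the standard equations, stated for a language \(A\) (respectively \(B\)) and a character \(c\):
\[
\begin{array}{rcl}
\textit{Der}\ c\ \varnothing & = & \varnothing\\
\textit{Der}\ c\ \{[]\} & = & \varnothing\\
\textit{Der}\ c\ \{[d]\} & = & \textit{if}\ c = d\ \textit{then}\ \{[]\}\ \textit{else}\ \varnothing\\
\textit{Der}\ c\ (A \cup B) & = & (\textit{Der}\ c\ A) \cup (\textit{Der}\ c\ B)\\
\textit{Der}\ c\ (A \,@\, B) & = & (\textit{Der}\ c\ A) \,@\, B \;\cup\; (\textit{if}\ [] \in A\ \textit{then}\ \textit{Der}\ c\ B\ \textit{else}\ \varnothing)\\
\textit{Der}\ c\ (A^\star ) & = & (\textit{Der}\ c\ A) \,@\, A^\star
\end{array}
\tag{1}
\]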
Brzozowski’s derivatives of regular expressions [4] can be easily defined by two recursive functions: the first, \(\textit{nullable}\), is from regular expressions to booleans (implementing a test when a regular expression can match the empty string), and the second, written \(r\backslash c\), takes a regular expression and a character to a (derivative) regular expression:
We may extend this definition to give derivatives w.r.t. strings: \(r\backslash [] \equiv r\) and \(r\backslash (c::s) \equiv (r\backslash c)\backslash s\).
Given the equations in (1), it is a relatively easy exercise in mechanical reasoning to establish that
Proposition 1
(1) \(\textit{nullable}(r)\) if and only if \([] \in L(r)\). (2) \(L(r\backslash c) = \textit{Der}\ c\ (L(r))\).
With this in place it is also very routine to prove that the regular expression matcher defined as \(\textit{matches}\ s\ r \equiv \textit{nullable}(r\backslash s)\) gives a positive answer if and only if \(s \in L(r)\). Consequently, this regular expression matching algorithm satisfies the usual specification for regular expression matching. While the matcher above calculates a provably correct YES/NO answer for whether a regular expression matches a string or not, the novel idea of Sulzmann and Lu [21] is to append another phase to this algorithm in order to calculate a lexical value. We will explain the details next.
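For concreteness, here is a minimal executable sketch of the matcher in Haskell; the Isabelle definitions in the text are analogous, and the datatype and function names below are ours:

    data Rexp = Zero | One | Ch Char | CS [Char]
              | Alt Rexp Rexp | Seq Rexp Rexp | Star Rexp

    -- test whether a regular expression can match the empty string
    nullable :: Rexp -> Bool
    nullable Zero        = False
    nullable One         = True
    nullable (Ch _)      = False
    nullable (CS _)      = False
    nullable (Alt r1 r2) = nullable r1 || nullable r2
    nullable (Seq r1 r2) = nullable r1 && nullable r2
    nullable (Star _)    = True

    -- Brzozowski derivative of a regular expression w.r.t. a character
    der :: Char -> Rexp -> Rexp
    der _ Zero        = Zero
    der _ One         = Zero
    der c (Ch d)      = if c == d then One else Zero
    der c (CS cs)     = if c `elem` cs then One else Zero
    der c (Alt r1 r2) = Alt (der c r1) (der c r2)
    der c (Seq r1 r2) = if nullable r1
                        then Alt (Seq (der c r1) r2) (der c r2)
                        else Seq (der c r1) r2
    der c (Star r)    = Seq (der c r) (Star r)

    -- derivative w.r.t. a string, and the matcher
    ders :: String -> Rexp -> Rexp
    ders s r = foldl (flip der) r s

    matches :: String -> Rexp -> Bool
    matches s r = nullable (ders s r)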
3 POSIX Regular Expression Matching
There have been many previous works that use values for encoding how a regular expression matches a string. The clever idea by Sulzmann and Lu [21] is to define a function on values that mirrors (but inverts) the construction of the derivative on regular expressions. Values are defined as the inductive datatype: \(v \;::=\; Void \mid Char\,c \mid Seq\,v_1\,v_2 \mid Left\,v \mid Right\,v \mid Stars\,vs\)
where we use \(vs\) to stand for a list of values. (This is similar to the approach taken by Frisch and Cardelli for GREEDY matching [8], and Sulzmann and Lu for POSIX matching [21]). The string underlying a value can be calculated by the flatten function, written \(|\_|\) and defined as:
We will sometimes refer to the underlying string of a value as the flattened value. We will also overload our notation and use \(|vs|\) for flattening a list of values and concatenating the resulting strings.
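As a concrete sketch, the datatype of values and the flatten function can be rendered in Haskell as follows (the constructors Chr, VLeft, VRight and VSeq slightly rename the paper’s Char, Left, Right and Seq to avoid clashes with the Haskell Prelude):

    data Val = Void | Chr Char | VSeq Val Val
             | VLeft Val | VRight Val | Stars [Val]

    -- the string underlying a value (written |v| in the text)
    flat :: Val -> String
    flat Void         = []
    flat (Chr c)      = [c]
    flat (VLeft v)    = flat v
    flat (VRight v)   = flat v
    flat (VSeq v1 v2) = flat v1 ++ flat v2
    flat (Stars vs)   = concatMap flat vs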
Sulzmann and Lu follow Nielsen and Henglein and define inductively an inhabitation relation that associates values to regular expressions (see [14, 21]). We define this relation, written \(\vdash v : r\), as followsFootnote 3
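In the notation used here, the rules of this relation can be stated as follows (the last rule carries the non-emptiness side-condition discussed below):
\[
\frac{}{\vdash Void : \mathbf{1}}\qquad
\frac{}{\vdash Char\,c : c}\qquad
\frac{c \in cs}{\vdash Char\,c : cs}\qquad
\frac{\vdash v_1 : r_1}{\vdash Left\,v_1 : r_1 + r_2}\qquad
\frac{\vdash v_2 : r_2}{\vdash Right\,v_2 : r_1 + r_2}
\]
\[
\frac{\vdash v_1 : r_1 \qquad \vdash v_2 : r_2}{\vdash Seq\,v_1\,v_2 : r_1 \cdot r_2}\qquad
\frac{\forall v \in vs.\ \vdash v : r \ \wedge\ |v| \ne []}{\vdash Stars\,vs : r^\star }
\tag{2}
\]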
where in the clause for \(Stars\) we use the notation \(v \in vs\) for indicating that \(v\) is a member in the list \(vs\). We require in this rule that every value in \(vs\) flattens to a non-empty string. The idea is that \(Stars\)-values satisfy the informal Star Rule (see Introduction) where the \(^\star \) does not match the empty string unless this is the only match for the repetition. Note also that no values are associated with the regular expression \(\mathbf{0}\) (since it does not match any string), and that the only value associated with the regular expression \(\mathbf{1}\) is \(Void\). We use the \(Char\)-value for both, single character regular expressions and character sets. It is routine to establish how values “inhabiting” a regular expression correspond to the language of a regular expression, namely
Proposition 2
\(L(r) = \{|v| \mid\ \vdash v : r\}\)
Given a regular expression \(r\) and a string \(s\), we define the set of all Lexical Values inhabited by \(r\) with the underlying string being \(s\) asFootnote 4 \(LV\ r\ s \equiv \{v \mid\ \vdash v : r \ \wedge\ |v| = s\}\).
The main property of \(LV\ r\ s\) is that it is always finite.
Proposition 3
\(\textit{finite}\,(LV\ r\ s)\)
This finiteness property does not hold in general if we remove the side-condition about \(|v| \ne []\) in the \(Stars\)-rule above. For example using Sulzmann and Lu’s less restrictive definition, \(LV\ (\mathbf{1}^\star )\ []\) would contain infinitely many values, but according to our more restricted definition it contains only a single value, namely \(Stars\,[]\).
If a regular expression \(r\) matches a string \(s\), then generally the set \(LV\ r\ s\) is not just a singleton set. In case of POSIX matching the problem is to calculate the unique lexical value that satisfies the (informal) POSIX rules from the Introduction. Graphically the POSIX value calculation algorithm by Sulzmann and Lu can be illustrated by the picture in Fig. 1 where the path from the left to the right involving \(\textit{der}/\textit{nullable}\) is the first phase of the algorithm (calculating successive Brzozowski’s derivatives) and \(\textit{mkeps}/\textit{inj}\), the path from right to left, the second phase. This picture shows the steps required when a regular expression, say \(r_1\), matches the string \([a,b,c]\). We first build the three derivatives (according to \(a\), \(b\) and \(c\)). We then use \(\textit{nullable}\) to find out whether the resulting derivative regular expression \(r_4\) can match the empty string. If yes, we call the function \(\textit{mkeps}\) that produces a value \(v_4\) for how \(r_4\) can match the empty string (taking into account the POSIX constraints in case there are several ways). This function is defined by the clauses:
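A sketch of this function in Haskell, continuing the sketch above (note the preference for Left in the alternative case, as discussed next):

    -- a value witnessing how a nullable regular expression matches [];
    -- partial by design: there are no clauses for Zero, Ch and CS
    mkeps :: Rexp -> Val
    mkeps One         = Void
    mkeps (Alt r1 r2) = if nullable r1 then VLeft (mkeps r1)
                                       else VRight (mkeps r2)
    mkeps (Seq r1 r2) = VSeq (mkeps r1) (mkeps r2)
    mkeps (Star _)    = Stars []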
Note that this function needs only to be partially defined, namely only for regular expressions that are nullable. In case \(\textit{nullable}\) fails, the string cannot be matched by \(r\) and the null value \(\textit{None}\) is returned by the algorithm. Note also how this function makes some subtle choices leading to a POSIX value: for example if an alternative regular expression, say \(r_1 + r_2\), can match the empty string and furthermore \(r_1\) can match the empty string, then we return a \(Left\)-value. The \(Right\)-value will only be returned if \(r_1\) cannot match the empty string.
The most interesting idea from Sulzmann and Lu [21] is the construction of a value for how \(r_1\) can match the string \([a,b,c]\) from the value how the last derivative, \(r_4\) in Fig. 1, can match the empty string. Sulzmann and Lu achieve this by stepwise “injecting back” the characters into the values, thus inverting the operation of building derivatives, but on the level of values. The corresponding function, called \(\textit{inj}\), takes three arguments: a regular expression, a character and a value. For example in the first (or right-most) \(\textit{inj}\)-step in Fig. 1 the regular expression is \(r_3\), the character \(c\) comes from the last derivative step, and \(v_4\) is the value corresponding to the derivative regular expression \(r_4\). The result is the new value \(v_3\). The final result of the algorithm is the value \(v_1\). The function \(\textit{inj}\) is defined by recursion on regular expressions and by analysing the shape of values (corresponding to the derivative regular expressions).
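A sketch of \(\textit{inj}\) in Haskell, with clause numbers matching those referenced in the discussion below:

    -- inj r c v: inject character c back into v, a value for r\c,
    -- yielding a value for r (clause numbers as in the text)
    inj :: Rexp -> Char -> Val -> Val
    inj (Ch _)      c Void                 = Chr c                         -- (1)
    inj (CS _)      c Void                 = Chr c                         -- (2)
    inj (Alt r1 _)  c (VLeft v)            = VLeft (inj r1 c v)            -- (3)
    inj (Alt _ r2)  c (VRight v)           = VRight (inj r2 c v)           -- (4)
    inj (Seq r1 _)  c (VSeq v1 v2)         = VSeq (inj r1 c v1) v2         -- (5)
    inj (Seq r1 _)  c (VLeft (VSeq v1 v2)) = VSeq (inj r1 c v1) v2         -- (6)
    inj (Seq r1 r2) c (VRight v2)          = VSeq (mkeps r1) (inj r2 c v2) -- (7)
    inj (Star r)    c (VSeq v (Stars vs))  = Stars (inj r c v : vs)        -- (8)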
To better understand what is going on in this definition it might be instructive to look first at the three sequence cases (clauses (5)–(7)). In each case we need to construct an “injected value” for \(r_1 \cdot r_2\). Because of the ‘shape’ of the regular expression, this must be a value of the form \(Seq\,\_\,\_\). Recall the clause of the derivative-function for sequence regular expressions: \((r_1 \cdot r_2)\backslash c \equiv \textit{if}\ \textit{nullable}(r_1)\ \textit{then}\ (r_1\backslash c)\cdot r_2 + r_2\backslash c\ \textit{else}\ (r_1\backslash c)\cdot r_2\)
Consider first the \(\textit{else}\)-branch where the derivative is \((r_1\backslash c)\cdot r_2\). The corresponding value must therefore be of the form \(Seq\,v_1\,v_2\), which matches the left-hand side in clause (5) of \(\textit{inj}\). In the \(\textit{then}\)-branch the derivative is an alternative, namely \((r_1\backslash c)\cdot r_2 + r_2\backslash c\). This means we either have to consider a \(Left\)- or \(Right\)-value. In case of the \(Left\)-value we know further it must be a value for a sequence regular expression. Therefore the pattern we match in the clause (6) is \(Left\,(Seq\,v_1\,v_2)\), while in (7) it is just \(Right\,v_2\). One more interesting point is in the right-hand side of clause (7): since in this case the regular expression \(r_1\) does not “contribute” to matching the string, that means it only matches the empty string, we need to call \(\textit{mkeps}\) in order to construct a value for how \(r_1\) can match this empty string. A similar argument applies for why we can expect in the left-hand side of clause (8) that the value is of the form \(Seq\,v\,(Stars\,vs)\)—the derivative of a star is \((r\backslash c)\cdot r^\star \). Finally, the reason for why we can ignore the first argument in clause (1) of \(\textit{inj}\) is that it will only ever be called in cases where the character in the regular expression coincides with the injected character \(c\), but the usual linearity restrictions in patterns do not allow us to build this constraint explicitly into our function definition.Footnote 5 Similarly the clause in (2) will only be called in cases where \(c \in cs\) holds. Notable in this clause, however, is the fact that we cannot ignore the second argument of the injection function (the character that is injected into the value), because otherwise there is no way to determine which character from the character set should be injected.
The idea of the \(\textit{inj}\)-function to “inject” a character, say \(c\), into a value can be made precise by the first part of the following lemma, which shows that the underlying string of an injected value has a prepended character \(c\); the second part shows that the underlying string of an \(\textit{mkeps}\)-value is always the empty string (given the regular expression is nullable, since otherwise \(\textit{mkeps}\) might not be defined).
Lemma 4
(1) If \(\vdash v : r\backslash c\) then \(|\textit{inj}\ r\ c\ v| = c::|v|\). (2) If \(\textit{nullable}(r)\) then \(|\textit{mkeps}\ r| = []\).
Proof
Both properties are by routine inductions: the first one can, for example, be proved by induction over the definition of derivatives; the second by an induction on \(r\). There are no interesting cases.
Having defined the \(\textit{mkeps}\) and \(\textit{inj}\) functions, we can extend Brzozowski’s matcher so that a value is constructed (assuming the regular expression matches the string). The clauses of the Sulzmann and Lu lexer are:
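In Haskell, with \(\textit{None}\)/\(\textit{Some}\) modelled by the Maybe type, a sketch of these clauses is:

    -- the Sulzmann and Lu lexer (None/Some modelled by Maybe)
    lexer :: Rexp -> String -> Maybe Val
    lexer r []      = if nullable r then Just (mkeps r) else Nothing
    lexer r (c : s) = case lexer (der c r) s of
                        Nothing -> Nothing
                        Just v  -> Just (inj r c v)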
If the regular expression does not match the string, \(\textit{None}\) is returned. If the regular expression does match the string, then \(\textit{Some}\) value is returned. One important virtue of this algorithm is that it can be implemented with ease in any functional programming language and also in Isabelle/HOL. In the remaining part of this section we prove that this algorithm is correct.
The well-known idea of POSIX matching is informally defined by some rules such as the Longest Match and Priority Rules (see Introduction); as correctly argued in [21], this needs formal specification. Sulzmann and Lu define an “ordering relation” between values and argue that there is a maximum value, as given by the derivative-based algorithm. In contrast, we shall introduce a simple inductive definition that specifies directly what a POSIX value is, incorporating the POSIX-specific choices into the side-conditions of our rules. Our definition is inspired by the matching relation given by Vansummeren [24]. The relation we define is ternary and written as \((s, r) \rightarrow v\), relating strings, regular expressions and values; the inductive rules are given in Fig. 2. We can prove that given a string \(s\) and regular expression \(r\), the POSIX value \(v\) is uniquely determined by \((s, r) \rightarrow v\).
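Concretely, the rules of Fig. 2 have the following shape in the notation introduced above (rule names as used in the discussion below):
\[
\frac{}{([], \mathbf{1}) \rightarrow Void}\ P\mathbf{1}\qquad
\frac{}{([c], c) \rightarrow Char\,c}\ PC\qquad
\frac{c \in cs}{([c], cs) \rightarrow Char\,c}\qquad
\frac{(s, r_1) \rightarrow v}{(s, r_1 + r_2) \rightarrow Left\,v}\ P{+}L\qquad
\frac{(s, r_2) \rightarrow v \quad s \notin L(r_1)}{(s, r_1 + r_2) \rightarrow Right\,v}\ P{+}R
\]
\[
\frac{(s_1, r_1) \rightarrow v_1 \quad (s_2, r_2) \rightarrow v_2 \quad \neg(\exists s_3\,s_4.\ s_3 \ne [] \wedge s_3 @ s_4 = s_2 \wedge s_1 @ s_3 \in L(r_1) \wedge s_4 \in L(r_2))}{(s_1 @ s_2, r_1 \cdot r_2) \rightarrow Seq\,v_1\,v_2}\ PS
\]
\[
\frac{}{([], r^\star ) \rightarrow Stars\,[]}\ P[]\qquad
\frac{(s_1, r) \rightarrow v \quad (s_2, r^\star ) \rightarrow Stars\,vs \quad |v| \ne [] \quad \neg(\exists s_3\,s_4.\ s_3 \ne [] \wedge s_3 @ s_4 = s_2 \wedge s_1 @ s_3 \in L(r) \wedge s_4 \in L(r^\star ))}{(s_1 @ s_2, r^\star ) \rightarrow Stars\,(v::vs)}\ P\star
\]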
Theorem 5
(1) If \((s, r) \rightarrow v\) then \(s \in L(r)\) and \(|v| = s\). (2) If \((s, r) \rightarrow v\) and \((s, r) \rightarrow v'\) then \(v = v'\).
Proof
Both parts are by induction on the definition of \((s, r) \rightarrow v\). The second part follows by a case analysis of \((s, r) \rightarrow v'\) and the first part.
We claim that our relation captures the idea behind the four informal POSIX rules shown in the Introduction: Consider for example the rules \(P{+}L\) and \(P{+}R\) where the POSIX value for a string and an alternative regular expression, that is \((s, r_1 + r_2)\), is specified—it is always a \(Left\)-value, except when the string to be matched is not in the language of \(r_1\); only then it is a \(Right\)-value (see the side-condition in \(P{+}R\)). Interesting is also the rule for sequence regular expressions (\(PS\)). The first two premises state that \(v_1\) and \(v_2\) are the POSIX values for \((s_1, r_1)\) and \((s_2, r_2)\) respectively. Consider now the third premise and note that the POSIX value of this rule should match the string \(s_1 @ s_2\). According to the Longest Match Rule, we want that the \(s_1\) is the longest initial split of \(s_1 @ s_2\) such that \(s_2\) is still recognised by \(r_2\). Let us assume, contrary to the third premise, that there exist an \(s_3\) and \(s_4\) such that \(s_2\) can be split up into a non-empty string \(s_3\) and a possibly empty string \(s_4\). Moreover the longer string \(s_1 @ s_3\) can be matched by \(r_1\) and the shorter \(s_4\) can still be matched by \(r_2\). In this case \(s_1\) would not be the longest initial split of \(s_1 @ s_2\) and therefore \(Seq\,v_1\,v_2\) cannot be a POSIX value for \((s_1 @ s_2, r_1 \cdot r_2)\). The main point is that our side-condition ensures the Longest Match Rule is satisfied.
A similar condition is imposed on the POSIX value in the \(P\star\)-rule. Also there we want that \(s_1\) is the longest initial split of \(s_1 @ s_2\) and furthermore the corresponding value \(v\) cannot be flattened to the empty string. In effect, we require that in each “iteration” of the star, some non-empty substring needs to be “chipped” away; only in case of the empty string we accept \(Stars\,[]\) as the POSIX value. Indeed we can show that our POSIX values are lexical values, which excludes those that contain subvalues flattening to the empty string.
Lemma 6
If \((s, r) \rightarrow v\) then \(v \in LV\ r\ s\).
Proof
By routine induction on \((s, r) \rightarrow v\).
Next is the lemma that shows the function \(\textit{mkeps}\) calculates the POSIX value for the empty string and a nullable regular expression.
Lemma 7
If \(\textit{nullable}(r)\) then \(([], r) \rightarrow \textit{mkeps}\ r\).
Proof
By routine induction on \(r\).
The central lemma for our POSIX relation is that the \(\textit{inj}\)-function preserves POSIX values.
Lemma 8
If \((s, r\backslash c) \rightarrow v\) then \((c::s, r) \rightarrow \textit{inj}\ r\ c\ v\).
Proof
By induction on \(r\). We explain two cases.
- Case \(r_1 + r_2\). There are two subcases, namely (a) \(v = Left\,v'\) and \((s, r_1\backslash c) \rightarrow v'\); and (b) \(v = Right\,v'\), \(s \notin L(r_1\backslash c)\) and \((s, r_2\backslash c) \rightarrow v'\). In (a) we know \((s, r_1\backslash c) \rightarrow v'\), from which we can infer \((c::s, r_1) \rightarrow \textit{inj}\ r_1\ c\ v'\) by induction hypothesis and hence \((c::s, r_1 + r_2) \rightarrow Left\,(\textit{inj}\ r_1\ c\ v')\) as needed. Similarly in subcase (b) where, however, in addition we have to use Proposition 1(2) in order to infer \(c::s \notin L(r_1)\).
- Case \(r_1 \cdot r_2\). There are three subcases:
- (a) \(v = Left\,(Seq\,v_1\,v_2)\) and \(\textit{nullable}(r_1)\)
- (b) \(v = Right\,v_1\) and \(\textit{nullable}(r_1)\)
- (c) \(v = Seq\,v_1\,v_2\) and \(\neg\textit{nullable}(r_1)\)
For (a) we know \((s_1, r_1\backslash c) \rightarrow v_1\) and \((s_2, r_2) \rightarrow v_2\) as well as
\(\neg(\exists s_3\,s_4.\ s_3 \ne [] \wedge s_3 @ s_4 = s_2 \wedge s_1 @ s_3 \in L(r_1\backslash c) \wedge s_4 \in L(r_2))\)
From the latter we can infer by Proposition 1(2):
\(\neg(\exists s_3\,s_4.\ s_3 \ne [] \wedge s_3 @ s_4 = s_2 \wedge (c::s_1) @ s_3 \in L(r_1) \wedge s_4 \in L(r_2))\)
We can use the induction hypothesis for \(r_1\) to obtain \((c::s_1, r_1) \rightarrow \textit{inj}\ r_1\ c\ v_1\). Putting this all together allows us to infer \((c::s_1 @ s_2, r_1 \cdot r_2) \rightarrow Seq\,(\textit{inj}\ r_1\ c\ v_1)\,v_2\). The case (c) is similar.
For (b) we know \((s, r_2\backslash c) \rightarrow v_1\) and \(s \notin L((r_1\backslash c)\cdot r_2)\). From the former we have \((c::s, r_2) \rightarrow \textit{inj}\ r_2\ c\ v_1\) by induction hypothesis for \(r_2\). From the latter we can infer
\(\neg(\exists s_3\,s_4.\ s_3 \ne [] \wedge s_3 @ s_4 = c::s \wedge s_3 \in L(r_1) \wedge s_4 \in L(r_2))\)
By Lemma 7 we know \(([], r_1) \rightarrow \textit{mkeps}\ r_1\) holds. Putting this all together, we can conclude with \((c::s, r_1 \cdot r_2) \rightarrow Seq\,(\textit{mkeps}\ r_1)\,(\textit{inj}\ r_2\ c\ v_1)\), as required. Finally suppose \(r = r_1^\star \). This case is very similar to the sequence case, except that we need to also ensure that \(|\textit{inj}\ r_1\ c\ v_1| \ne []\). This follows from \((c::s_1, r_1) \rightarrow \textit{inj}\ r_1\ c\ v_1\) (which in turn follows from \((s_1, r_1\backslash c) \rightarrow v_1\) and the induction hypothesis). \(\square \)
With Lemma 8 in place, it is completely routine to establish that the Sulzmann and Lu lexer satisfies our specification (returning the null value \(\textit{None}\) iff the string is not in the language of the regular expression, and returning a unique POSIX value \(\textit{Some}\ v\) iff the string is in the language):
Theorem 9
(1) \(s \notin L(r)\) if and only if \(\textit{lexer}\ r\ s = \textit{None}\). (2) \(s \in L(r)\) if and only if there exists a \(v\) such that \(\textit{lexer}\ r\ s = \textit{Some}\ v\) and \((s, r) \rightarrow v\).
Proof
By induction on \(s\) using Lemmas 7 and 8.
In (2) we further know by Theorem 5 that the value returned by the lexer must be unique. A simple corollary of our two theorems therefore is:
Corollary 10
\(\textit{lexer}\ r\ s = \textit{Some}\ v\) if and only if \((s, r) \rightarrow v\).
This concludes our correctness proof. Note that we have not changed the algorithm of Sulzmann and Lu,Footnote 6 but introduced our own specification for what a correct result—a POSIX value—should be. In the next section we show that our specification coincides with another one given by Okui and Suzuki using a different technique.
4 Ordering of Values According to Okui and Suzuki
While in the previous section we have defined POSIX values directly in terms of a ternary relation (see inference rules in Fig. 2), Sulzmann and Lu took a different approach in [21]: they introduced an ordering for values and identified POSIX values as the maximal elements. An extended version of [21] is available at the website of its first author; this includes more details of their proofs, but these are evidently not yet in final form. Unfortunately, we were not able to verify claims that their ordering has properties such as being transitive or having maximal elements.
Okui and Suzuki [16, 17] described another ordering of values, which they use to establish the correctness of their automata-based algorithm for POSIX matching. Their ordering resembles some aspects of the one given by Sulzmann and Lu, but overall is quite different. To begin with, Okui and Suzuki identify POSIX values as minimal, rather than maximal, elements in their ordering. A more substantial difference is that the ordering by Okui and Suzuki uses positions in order to identify and compare subvalues. Positions are lists of natural numbers. This allows them to quite naturally formalise the Longest Match and Priority rules of the informal POSIX standard. Consider for example the value \(v_1\) defined as \(Stars\,[Seq\,(Char\,x)\,(Char\,y),\ Char\,z]\).
At position \([0,1]\) of this value is the subvalue \(Char\,y\) and at position \([1]\) the subvalue \(Char\,z\). At the ‘root’ position, or empty list \([]\), is the whole value \(v_1\). Positions such as \([0,1,0]\) or \([2]\) are outside of \(v_1\). If it exists, the subvalue of \(v_1\) at a position \(p\), written \(v_1|_p\), can be recursively defined by
In the last clause we use Isabelle’s notation \(vs\,!\,n\) for the \(n\)th element in a list. The set of positions inside a value \(v\), written \(\textit{Pos}\ v\), is given by
whereby \(\textit{len}\) in the last clause stands for the length of a list. Clearly for every position inside a value there exists a subvalue at that position.
To help understand the ordering of Okui and Suzuki, consider again the earlier value \(v_1\) and compare it with the value \(v_2\) defined as \(Stars\,[Char\,x,\ Char\,y,\ Char\,z]\).
Both values match the string \(xyz\), that means if we flatten these values at their respective root position, we obtain \(xyz\). However, at position \([0]\), \(v_1\) matches \(xy\) whereas \(v_2\) matches only the shorter \(x\). So according to the Longest Match Rule, we should prefer \(v_1\), rather than \(v_2\), as POSIX value for the string \(xyz\) (and corresponding regular expression). In order to formalise this idea, Okui and Suzuki introduce a measure for subvalues at position \(p\), called the norm of \(v\) at position \(p\) and written \(\Vert v\Vert _p\). We can define this measure in Isabelle as an integer as follows:
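A Haskell sketch of subvalues, positions and the norm \(\Vert v\Vert _p\), continuing the earlier sketches (function names are ours):

    type Pos = [Int]

    -- subvalue of v at position p (partial: defined for positions inside v)
    at :: Val -> Pos -> Val
    at v           []       = v
    at (VLeft v)   (0 : ps) = at v ps
    at (VRight v)  (1 : ps) = at v ps
    at (VSeq v1 _) (0 : ps) = at v1 ps
    at (VSeq _ v2) (1 : ps) = at v2 ps
    at (Stars vs)  (n : ps) = at (vs !! n) ps

    -- the positions inside a value
    pos :: Val -> [Pos]
    pos Void         = [[]]
    pos (Chr _)      = [[]]
    pos (VLeft v)    = [] : [0 : ps | ps <- pos v]
    pos (VRight v)   = [] : [1 : ps | ps <- pos v]
    pos (VSeq v1 v2) = [] : [0 : ps | ps <- pos v1] ++ [1 : ps | ps <- pos v2]
    pos (Stars vs)   = [] : [n : ps | (n, v) <- zip [0 ..] vs, ps <- pos v]

    -- the norm: length of the flattened subvalue at p, or -1 outside v
    norm :: Val -> Pos -> Int
    norm v p = if p `elem` pos v then length (flat (at v p)) else -1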
where we take the length of the flattened value at position \(p\), provided the position is inside \(v\); if not, then the norm is \(-1\). The default \(-1\) for outside positions is crucial for the POSIX requirement of preferring a \(Left\)-value over a \(Right\)-value (if they can match the same string—see the Priority Rule from the Introduction). For this consider the values \(w_1 = Left\,(Char\,x)\) and \(w_2 = Right\,(Char\,x)\).
Both values match \(x\). At position \([0]\) the norm of \(w_1\) is \(1\) (the subvalue matches \(x\)), but the norm of \(w_2\) is \(-1\) (the position is outside \(w_2\) according to how we defined the ‘inside’ positions of \(Left\)- and \(Right\)-values). Of course at position \([1]\) the norms are reversed, but the point is that subvalues will be analysed according to lexicographically ordered positions. According to this ordering, the position \([0]\) takes precedence over \([1]\) and thus also \(w_1\) will be preferred over \(w_2\). The lexicographic ordering of positions, written \(\_ <_{lex} \_\), can be conveniently formalised by three inference rules:
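These three rules can be stated as follows, with \(p < q\) the usual order on natural numbers:
\[
\frac{}{[] <_{lex} p::ps}\qquad
\frac{p < q}{p::ps <_{lex} q::qs}\qquad
\frac{ps <_{lex} qs}{p::ps <_{lex} p::qs}
\]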
With the norm and lexicographic order in place, we can state the key definition of Okui and Suzuki [16]: a value \(v_1\) is smaller at position \(p\) than \(v_2\), written \(v_1 \prec _p v_2\), if and only if (i) the norm at position \(p\) is greater in \(v_1\), that is \(\Vert v_1\Vert _p > \Vert v_2\Vert _p\) (the string matched by \(v_1\) at \(p\) is longer than the one matched by \(v_2\)), and (ii) for all positions \(q\) that are inside \(v_1\) or \(v_2\) and that are lexicographically smaller than \(p\), we have the same norm, namely \(\Vert v_1\Vert _q = \Vert v_2\Vert _q\).
The position \(p\) in this definition acts as the first distinct position of \(v_1\) and \(v_2\), where both values match strings of different length [16]. Since at \(p\) the values \(v_1\) and \(v_2\) match different strings, the ordering is irreflexive. Derived from the definition above are the following two orderings: \(v_1 \prec v_2 \equiv \exists p.\ v_1 \prec _p v_2\) and \(v_1 \preccurlyeq v_2 \equiv v_1 \prec v_2 \vee v_1 = v_2\).
While we encountered a number of obstacles (which we failed to overcome) when trying to establish properties like transitivity for the ordering of Sulzmann and Lu, it is relatively straightforward to establish this property for the orderings by Okui and Suzuki.
Lemma 11
(Transitivity) If \(v_1 \prec v_2\) and \(v_2 \prec v_3\) then \(v_1 \prec v_3\).
Proof
From the assumption we obtain two positions \(p\) and \(q\), where the values \(v_1\) and \(v_2\) (respectively \(v_2\) and \(v_3\)) are ‘distinct’. Since \(<_{lex}\) is trichotomous, we need to consider three cases, namely \(p = q\), \(p <_{lex} q\) and \(q <_{lex} p\). Let us look at the first case. Clearly \(\Vert v_2\Vert _p < \Vert v_1\Vert _p\) and \(\Vert v_3\Vert _p < \Vert v_2\Vert _p\) imply \(\Vert v_3\Vert _p < \Vert v_1\Vert _p\). It remains to show for a position \(q'\) with \(q' <_{lex} p\) that \(\Vert v_1\Vert _{q'} = \Vert v_3\Vert _{q'}\) holds. Suppose \(q' \in \textit{Pos}\ v_1\), then we can infer from the first assumption that \(\Vert v_1\Vert _{q'} = \Vert v_2\Vert _{q'}\). But this means that \(q'\) must be in \(\textit{Pos}\ v_2\) too (the norm cannot be \(-1\) given \(q' \in \textit{Pos}\ v_1\)). Hence we can use the second assumption and infer \(\Vert v_2\Vert _{q'} = \Vert v_3\Vert _{q'}\), which concludes this case with \(v_1 \prec v_3\). The reasoning in the other cases is similar.
The proof for \(\preccurlyeq \) is similar and omitted. It is also straightforward to show that \(\prec \) and \(\preccurlyeq \) are partial orders, and that \(\prec \) is well-founded over lexical values of a given regular expression and given string. Okui and Suzuki furthermore show that they are linear orderings for lexical values [16], but we have not formalised this in Isabelle. It is not essential for our results. What we are going to show below is that for a given \(r\) and \(s\), the orderings have a unique minimal element on the set \(LV\ r\ s\), which is the POSIX value we defined in the previous section. We start with two properties that show how the length of a flattened value relates to the \(\prec \)-ordering.
Proposition 12
(1) If \(\textit{len}\,|v_1| > \textit{len}\,|v_2|\) then \(v_1 \prec v_2\). (2) If \(|v_2|\) is a strict prefix of \(|v_1|\) then \(v_1 \prec v_2\).
Both properties follow from the definition of the ordering. Note that (2) entails that a value, say \(v_1\), must be smaller than another value, say \(v_2\), whenever the flattened value \(|v_2|\) is a strict prefix of \(|v_1|\). For our proofs it will be useful to have the following properties—in each case the underlying strings of the compared values are the same:
Proposition 13
One might prefer that statements (4) and (5) (respectively (6) and (7)) are combined into a single iff-statement (like the ones for \(Left\) and \(Right\)). Unfortunately this cannot be done easily: such a single statement would require an additional assumption about the two values \(v_1\) and \(v_2\) being inhabited by the same regular expression. The complexity of the proofs involved seems not to justify such a ‘cleaner’ single statement. The statements given are just the properties that allow us to establish our theorems without any difficulty. The proofs for Proposition 13 are routine.
Next we establish how Okui and Suzuki’s orderings relate to our definition of POSIX values. Given a POSIX value \(v_1\) for \(r\) and \(s\), any other lexical value \(v_2\) in \(LV\ r\ s\) is greater or equal than \(v_1\), namely:
Theorem 14
If \((s, r) \rightarrow v_1\) and \(v_2 \in LV\ r\ s\) then \(v_1 \preccurlyeq v_2\).
Proof
By induction on our POSIX rules. By Theorem 5 and the definition of \(LV\), it is clear that \(v_1\) and \(v_2\) have the same underlying string \(s\). The three base cases are straightforward: for example for \(v_1 = Void\), we have that \(v_2 \in LV\ \mathbf{1}\ []\) must also be of the form \(Void\). Therefore we have \(v_1 \preccurlyeq v_2\). The inductive cases for \(r\) being of the form \(r_1 + r_2\) and \(r_1 \cdot r_2\) are as follows:
- Case \(P{+}L\) with \((s, r_1 + r_2) \rightarrow Left\,w_1\): In this case the value \(v_2\) is either of the form \(Left\,w_2\) or \(Right\,w_2\). In the latter case we can immediately conclude with \(v_1 \preccurlyeq v_2\) since a \(Left\)-value with the same underlying string is always smaller than a \(Right\)-value by Proposition 13(1). In the former case we have \(w_2 \in LV\ r_1\ s\) and can use the induction hypothesis to infer \(w_1 \preccurlyeq w_2\). Because \(w_1\) and \(w_2\) have the same underlying string \(s\), we can conclude with \(Left\,w_1 \preccurlyeq Left\,w_2\) using Proposition 13(2).
- Case \(P{+}R\) with \((s, r_1 + r_2) \rightarrow Right\,w_1\): This case is similar to the previous case, except that we additionally know \(s \notin L(r_1)\). This is needed when \(v_2\) is of the form \(Left\,w_2\): since \(|w_2| = s\) and \(\vdash w_2 : r_1\), we can derive a contradiction with \(s \notin L(r_1)\) using Proposition 2. So also in this case \(v_1 \preccurlyeq v_2\).
- Case \(PS\) with \((s_1 @ s_2, r_1 \cdot r_2) \rightarrow Seq\,w_1\,w_2\): We can assume \(v_2 = Seq\,u_1\,u_2\) with \(\vdash u_1 : r_1\) and \(\vdash u_2 : r_2\). We have \(s_1 @ s_2 = |u_1| @ |u_2|\). By the side-condition of the \(PS\)-rule we know that either \(s_1 = |u_1|\) or that \(|u_1|\) is a strict prefix of \(s_1\). In the latter case we can infer \(w_1 \prec u_1\) by Proposition 12(2) and from this \(v_1 \preccurlyeq v_2\) by Proposition 13(5) (as noted above \(v_1\) and \(v_2\) must have the same underlying string). In the former case we know \(u_1 \in LV\ r_1\ s_1\) and \(u_2 \in LV\ r_2\ s_2\). With this we can use the induction hypotheses to infer \(w_1 \preccurlyeq u_1\) and \(w_2 \preccurlyeq u_2\). By Proposition 13(4,5) we can again infer \(v_1 \preccurlyeq v_2\).
The case for \(P\star\) is similar to the \(PS\)-case and omitted. \(\square \)
This theorem shows that our POSIX value \(v_1\) for a regular expression \(r\) and string \(s\) is in fact a minimal element of the values in \(LV\ r\ s\). By Proposition 12(2) we also know that any value in \(LV\ r\ s'\), with \(s'\) being a strict prefix of \(s\), cannot be smaller than \(v_1\). The next theorem shows the opposite—namely any minimal element in \(LV\ r\ s\) must be a POSIX value. This can be established by induction on \(r\), but the proof can be drastically simplified by using the fact from the previous section about the existence of a POSIX value whenever a string \(s \in L(r)\).
Theorem 15
If \(v_1 \in LV\ r\ s\) and \(\forall v_2 \in LV\ r\ s.\ \neg(v_2 \prec v_1)\) then \((s, r) \rightarrow v_1\).
Proof
If \(v_1 \in LV\ r\ s\) then \(s \in L(r)\) by Proposition 2. Hence by Theorem 9(2) there exists a POSIX value \(v_P\) with \((s, r) \rightarrow v_P\) and by Lemma 6 we also have \(v_P \in LV\ r\ s\). By Theorem 14 we therefore have \(v_P \preccurlyeq v_1\). If \(v_P = v_1\) then we are done. Otherwise we have \(v_P \prec v_1\), which however contradicts the second assumption about \(v_1\) being the smallest element in \(LV\ r\ s\). So we are done in this case too. \(\square \)
From this we can also show that if \(LV\ r\ s\) is non-empty (or equivalently \(s \in L(r)\)) then it has a unique minimal element:
Corollary 16
To sum up, we have shown that the (unique) minimal elements of the ordering by Okui and Suzuki are exactly the POSIX values we defined inductively in Sect. 3. This provides an independent confirmation that our ternary relation formalises the informal POSIX rules.
5 Optimisations
Derivatives as calculated by Brzozowski’s method are usually more complex regular expressions than the initial one; the result is that the derivative-based matching and lexing algorithms are often abysmally slow. However, various optimisations are possible, such as the simplifications of \(\mathbf{0} + r\), \(r + \mathbf{0}\), \(\mathbf{1} \cdot r\) and \(r \cdot \mathbf{1}\) to \(r\). These simplifications can speed up the algorithms considerably, as noted in [21]. One of the advantages of having a simple specification and correctness proof is that the latter can be refined to prove the correctness of such simplification steps. While the simplification of regular expressions according to rules like \(\mathbf{0} + r \Rightarrow r\), \(r + \mathbf{0} \Rightarrow r\), \(\mathbf{1} \cdot r \Rightarrow r\) and \(r \cdot \mathbf{1} \Rightarrow r\) (3)
is well understood, there is an obstacle with the POSIX value calculation algorithm by Sulzmann and Lu: if we build a derivative regular expression and then simplify it, we will calculate a POSIX value for this simplified derivative regular expression, not for the original (unsimplified) derivative regular expression. Sulzmann and Lu [21] overcome this obstacle by not just calculating a simplified regular expression, but also calculating a rectification function that “repairs” the incorrect value.
The rectification functions can be (slightly clumsily) implemented in Isabelle/HOL as follows using some auxiliary functions:
These auxiliary functions encode the simplification rules in (3) and compose the rectification functions (simplifications can occur deep inside the regular expression). The main simplification function, \(\textit{simp}\), is then
where \(\textit{id}\) stands for the identity function. The function \(\textit{simp}\) returns a simplified regular expression and a corresponding rectification function. Note that we do not simplify under stars: doing so seems to slow down the algorithm, rather than speed it up. The optimised lexer is then given by the clauses:
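A Haskell sketch of simplification with rectification and of the optimised lexer; this follows the description in the text, but the concrete function names and the pairing of results are our rendering:

    -- simplification returning a simplified regex plus a rectification
    -- function that repairs values of the simplified regex
    simp :: Rexp -> (Rexp, Val -> Val)
    simp (Alt r1 r2) = case (simp r1, simp r2) of
      ((Zero, _ ), (r2', f2)) -> (r2', \v -> VRight (f2 v))
      ((r1', f1), (Zero, _ )) -> (r1', \v -> VLeft (f1 v))
      ((r1', f1), (r2', f2))  -> (Alt r1' r2', \v -> case v of
                                    VLeft w  -> VLeft (f1 w)
                                    VRight w -> VRight (f2 w))
    simp (Seq r1 r2) = case (simp r1, simp r2) of
      ((One, f1), (r2', f2)) -> (r2', \v -> VSeq (f1 Void) (f2 v))
      ((r1', f1), (One, f2)) -> (r1', \v -> VSeq (f1 v) (f2 Void))
      ((r1', f1), (r2', f2)) -> (Seq r1' r2', \v -> case v of
                                   VSeq w1 w2 -> VSeq (f1 w1) (f2 w2))
    simp r = (r, id)   -- in particular: no simplification under stars

    -- the optimised lexer: simplify the derivative, rectify afterwards
    slexer :: Rexp -> String -> Maybe Val
    slexer r []      = if nullable r then Just (mkeps r) else Nothing
    slexer r (c : s) = let (r', fr) = simp (der c r) in
                       case slexer r' s of
                         Nothing -> Nothing
                         Just v  -> Just (inj r c (fr v))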
In the second clause we first calculate the derivative \(r\backslash c\) and then simplify the result. This gives us a simplified derivative \(r'\) and a rectification function \(f_r\). The lexer is then recursively called with the simplified derivative, but before we inject the character \(c\) into the value \(v\), we need to rectify \(v\) (that is construct \(f_r(v)\)). Before we can establish the correctness of the optimised lexer, we need to show that simplification preserves the language and simplification preserves our POSIX relation once the value is rectified (recall \(\textit{simp}\) generates a (regular expression, rectification function) pair):
Lemma 17
(1) \(L(\textit{fst}\,(\textit{simp}\ r)) = L(r)\). (2) If \((s, \textit{fst}\,(\textit{simp}\ r)) \rightarrow v\) then \((s, r) \rightarrow \textit{snd}\,(\textit{simp}\ r)\,(v)\).
Proof
Both are by induction on \(r\). There is no interesting case for the first statement. For the second statement, of interest are the \(r = r_1 + r_2\) and \(r = r_1 \cdot r_2\) cases. In each case we have to analyse four subcases whether \(\textit{fst}\,(\textit{simp}\ r_1)\) and \(\textit{fst}\,(\textit{simp}\ r_2)\) equal \(\mathbf{0}\) (respectively \(\mathbf{1}\)). For example for \(r = r_1 + r_2\), consider the subcase \(\textit{fst}\,(\textit{simp}\ r_1) = \mathbf{0}\) and \(\textit{fst}\,(\textit{simp}\ r_2) \ne \mathbf{0}\). By assumption we know \((s, \textit{fst}\,(\textit{simp}\,(r_1 + r_2))) \rightarrow v\). From this we can infer \((s, \textit{fst}\,(\textit{simp}\ r_2)) \rightarrow v\) and by IH also (*) \((s, r_2) \rightarrow \textit{snd}\,(\textit{simp}\ r_2)\,(v)\). Given \(\textit{fst}\,(\textit{simp}\ r_1) = \mathbf{0}\) we know \(L(\textit{fst}\,(\textit{simp}\ r_1)) = \varnothing\). By the first statement \(L(r_1)\) is the empty set, meaning (**) \(s \notin L(r_1)\). Taking (*) and (**) together gives by the \(P{+}R\)-rule \((s, r_1 + r_2) \rightarrow Right\,(\textit{snd}\,(\textit{simp}\ r_2)\,(v))\). In turn this gives \((s, r_1 + r_2) \rightarrow \textit{snd}\,(\textit{simp}\,(r_1 + r_2))\,(v)\) as we need to show. The other cases are similar. \(\square \)
We can now prove relatively straightforwardly that the optimised lexer produces the expected result:
Theorem 18
\(\textit{slexer}\ r\ s = \textit{lexer}\ r\ s\)
Proof
By induction on \(s\) generalising over \(r\). The case \(s = []\) is trivial. For the cons-case suppose the string is of the form \(c::s\). By induction hypothesis we know \(\textit{slexer}\ r\ s = \textit{lexer}\ r\ s\) holds for all \(r\) (in particular for \(r\) being the derivative \(r\backslash c\)). Let \(r'\) be the simplified derivative regular expression, that is \(\textit{fst}\,(\textit{simp}\,(r\backslash c))\), and \(f_r\) be the rectification function, that is \(\textit{snd}\,(\textit{simp}\,(r\backslash c))\). We distinguish the cases whether (*) \(s \in L(r\backslash c)\) or not. In the first case we have by Theorem 9(2) a value \(v\) so that \(\textit{lexer}\ (r\backslash c)\ s = \textit{Some}\ v\) and \((s, r\backslash c) \rightarrow v\) hold. By Lemma 17(1) we can also infer from (*) that \(s \in L(r')\) holds. Hence we know by Theorem 9(2) that there exists a \(v'\) with \(\textit{lexer}\ r'\ s = \textit{Some}\ v'\) and \((s, r') \rightarrow v'\). From the latter we know by Lemma 17(2) that \((s, r\backslash c) \rightarrow f_r(v')\) holds. By the uniqueness of the POSIX relation (Theorem 5) we can infer that \(v\) is equal to \(f_r(v')\)—that is the rectification function applied to \(v'\) produces the original \(v\). Now the case follows by the definitions of \(\textit{lexer}\) and \(\textit{slexer}\).
In the second case where \(s \notin L(r\backslash c)\) we have that \(\textit{lexer}\ (r\backslash c)\ s = \textit{None}\) by Theorem 9(1). We also know by Lemma 17(1) that \(s \notin L(r')\). Hence \(\textit{lexer}\ r'\ s = \textit{None}\) by Theorem 9(1) and by IH then also \(\textit{slexer}\ r\ (c::s) = \textit{None}\). With this we can conclude in this case too. \(\square \)
6 Extensions
A strong point in favour of Sulzmann and Lu’s algorithm is that it can be extended in various ways. If we are interested in tokenising a string, then we need to not just split up the string into tokens, but also “classify” the tokens (for example whether they are keywords or identifiers and so on). This can be done with only minor modifications to the algorithm by introducing record regular expressions, written \((l : r)\), and record values, written \((l : v)\) (for example [22]):
where \(l\) is a label, say a string, \(r\) a regular expression and \(v\) a value. All functions can be smoothly extended to these regular expressions and values. For example \((l : r)\) is nullable iff \(r\) is, and so on. The purpose of the record regular expression is to mark certain parts of a regular expression and then record in the calculated value which parts of the string were matched by this part. The label can then serve as classification for the tokens. For this recall the regular expressions \(r_{key}\) and \(r_{id}\) for keywords and identifiers from the Introduction. With the record regular expression we can form \(((key : r_{key}) + (id : r_{id}))^\star \) and then traverse the calculated value and only collect the underlying strings in record values. With this we obtain finite sequences of pairs of labels and strings, for example a list of the form \([(key, s_1),\ (id, s_2),\ \ldots]\),
from which tokens with classifications (keyword-token, identifier-token and so on) can be extracted.
In the context of POSIX matching, it is also interesting to study additional constructors for bounded repetitions of regular expressions. For this let us extend the results from the previous sections to the following four additional regular expression constructors: \(r^{\{n\}}\), \(r^{\{..n\}}\), \(r^{\{n..\}}\) and \(r^{\{n..m\}}\).
We will call them bounded regular expressions. They can be used to specify how many times a regular expression should match. With the help of the power operator (definition omitted) for sets of strings, the languages recognised by these regular expressions can be defined in Isabelle as follows:
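With \((L(r))^i\) the \(i\)-fold power of the language \(L(r)\), the definitions take the following form:
\[
\begin{array}{rcl}
L(r^{\{n\}}) & \equiv & (L(r))^n\\
L(r^{\{..n\}}) & \equiv & \bigcup _{i \le n}\ (L(r))^i\\
L(r^{\{n..\}}) & \equiv & \bigcup _{n \le i}\ (L(r))^i\\
L(r^{\{n..m\}}) & \equiv & \bigcup _{n \le i \le m}\ (L(r))^i
\end{array}
\]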
This definition implies that \(r^{\{n..m\}}\) in the last clause matches no string in case \(m < n\), because then the interval is empty.
While the language recognised by these regular expressions is straightforward, some care is needed for how to define the corresponding lexical values. First, with a slight abuse of language, we will (re)use values of the form \(Stars\,vs\) for values inhabited in bounded regular expressions. Second, we need to introduce the inductive rules for extending our inhabitation relation shown in (2), from which we then derived our notion of lexical values. Given the rule for \(r^\star \), the rule for \(r^{\{..n\}}\) just requires additionally that the length of the list of values must be smaller or equal to \(n\), that is:
\[
\frac{\forall v \in vs.\ \vdash v : r \ \wedge\ |v| \ne [] \qquad \textit{len}\ vs \le n}{\vdash Stars\,vs : r^{\{..n\}}}
\]
Like in the \(r^\star \)-rule, we require with the left premise that some non-empty part of the string is ‘chipped’ away by every value in \(vs\); that means the corresponding values do not flatten to the empty string.
In the rule for \(r^{\{n\}}\) (that is exactly-\(n\)-times \(r\)) we will require that the length of the list of values equals \(n\). But enforcing in this case that every one of these values ‘chips’ away some part of a string would be too strong. Therefore matters are a bit more complicated in the rule for \(r^{\{n\}}\). According to the informal POSIX rules we have to allow that there is an “initial segment” that needs to chip away some parts of the string, but if this segment is too short for satisfying the exactly-n-times constraint, it can be followed by a segment where every value flattens to the empty string. One way of expressing this constraint in Isabelle is by the rule:
\[
\frac{\forall v \in vs_1.\ \vdash v : r \ \wedge\ |v| \ne [] \qquad \forall v \in vs_2.\ \vdash v : r \ \wedge\ |v| = [] \qquad \textit{len}\,(vs_1 @ vs_2) = n}{\vdash Stars\,(vs_1 @ vs_2) : r^{\{n\}}}
\]
The \(vs_1\) is the initial segment with non-empty flattened values, whereas \(vs_2\) is the segment where all values flatten to the empty string. This idea gets even more complicated for the \(r^{\{n..\}}\) regular expression. The reason is that we need to distinguish the case where we use fewer repetitions than n. In this case we need to “fill” the end with values that match the empty string to obtain at least n repetitions. But in case we need more than n repetitions, then all values should match a non-empty string. This leads to two inhabitation rules for \(r^{\{n..\}}\):
\[
\frac{\forall v \in vs_1.\ \vdash v : r \ \wedge\ |v| \ne [] \qquad \forall v \in vs_2.\ \vdash v : r \ \wedge\ |v| = [] \qquad \textit{len}\,(vs_1 @ vs_2) = n}{\vdash Stars\,(vs_1 @ vs_2) : r^{\{n..\}}}\qquad
\frac{\forall v \in vs.\ \vdash v : r \ \wedge\ |v| \ne [] \qquad \textit{len}\ vs \ge n}{\vdash Stars\,vs : r^{\{n..\}}}
\]
Note that these two rules “collapse” in case \(n=0\) to just the single rule given for \(r^\star \) in the definition shown in (2). We have similar rules for the between-n-m-times operator \(r^{\{n..m\}}\) (omitted). These rules ensure that our definition of the sets of lexical values \(LV\,r\,s\) still yields finite sets and also fits with the ordering given by Okui and Suzuki (which requires minimal values over the sets \(LV\,r\,s\)).
Fortunately, the other definitions extend more smoothly to bounded repetitions. For example the rules for derivatives are:
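A sketch of these rules in Haskell, assuming the Rexp datatype from Sect. 2 is extended with constructors UpTo r n for \(r^{\{..n\}}\), Exactly r n for \(r^{\{n\}}\), AtLeast r n for \(r^{\{n..\}}\) and Between r n m for \(r^{\{n..m\}}\) (one standard formulation consistent with the languages above):

    -- additional derivative clauses (to be merged into der above)
    der c (UpTo r n)
      | n == 0    = Zero
      | otherwise = Seq (der c r) (UpTo r (n - 1))
    der c (Exactly r n)
      | n == 0    = Zero
      | otherwise = Seq (der c r) (Exactly r (n - 1))
    der c (AtLeast r n)
      | n == 0    = Seq (der c r) (Star r)
      | otherwise = Seq (der c r) (AtLeast r (n - 1))
    der c (Between r n m)
      | m == 0    = Zero
      | n == 0    = Seq (der c r) (UpTo r (m - 1))
      | otherwise = Seq (der c r) (Between r (n - 1) (m - 1))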
For mkeps we need to generate the shortest list of values we can get “away with” given the boundedness constraints. This means for example in the case \(r^{\{..n\}}\) we can return the empty list, like for stars. In the other cases we have to generate a list of exactly n copies of the mkeps-value, because n is the smallest number of repetitions required.
In this definition we use Isabelle’s \(\textit{replicate}\)-function in order to generate a list of n copies of a value. The injection function also extends straightforwardly to the bounded regular expressions as follows:
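Under the same assumptions as above, the additional clauses for mkeps and inj can be sketched as follows:

    -- shortest witnesses for the empty string (to be merged into mkeps)
    mkeps (UpTo _ _)      = Stars []
    mkeps (Exactly r n)   = Stars (replicate n (mkeps r))
    mkeps (AtLeast r n)   = Stars (replicate n (mkeps r))
    mkeps (Between r n _) = Stars (replicate n (mkeps r))

    -- in each case the derivative has the shape (r\c) . r^{...}, so the
    -- value being injected into is a VSeq whose right component is Stars
    inj (UpTo r _)      c (VSeq v (Stars vs)) = Stars (inj r c v : vs)
    inj (Exactly r _)   c (VSeq v (Stars vs)) = Stars (inj r c v : vs)
    inj (AtLeast r _)   c (VSeq v (Stars vs)) = Stars (inj r c v : vs)
    inj (Between r _ _) c (VSeq v (Stars vs)) = Stars (inj r c v : vs)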
Similarly our POSIX definition can be easily extended to the additional constructors. For example for \(r^{\{n\}}\) we have two rules.
The first rule deals with the case when an empty string needs to be recognised; the second with the case when the string is non-empty. In the latter case the “initial segment” must match non-empty strings only. The idea behind this formulation is to avoid situations where an earlier value matches the empty string, while it is actually possible to “nibble away” some parts of the string. The rules for the other bounded regular expressions are similar and we omit them here. With these definitions in place, our proofs given in the previous sections extend to the bounded repetitions. The main point is that there are no surprises.
What is good about our re-use of the \(Stars\)-constructor for the values of bounded regular expressions is that we did not need to make any changes to the ordering definitions by Okui and Suzuki. It still holds that our POSIX values are the minimal elements for the lexical value sets, and vice versa. In this way we again obtain independent assurance that our definitions capture correctly the idea behind POSIX matching.
Unfortunately, in our formal proofs in Isabelle/HOL we need to give the definitions and proofs all over again in a separate theory, since there is no way of making Isabelle accept proofs for the basic regular expressions (defined as an inductive datatype) and then augmenting the datatype with new constructors. This would be a really “cool” feature for Isabelle, but we have no idea how this could be achieved elegantly.
7 Conclusion
We have implemented in Isabelle/HOL the POSIX value calculation algorithm introduced by Sulzmann and Lu [21]. Our implementation is nearly identical to the original and all modifications we introduced are harmless (like our character-set clause for \(\textit{inj}\)). We have proved this algorithm to be correct, but correct according to our own specification of what POSIX values are. Our specification (inspired by work of Vansummeren [24]) appears to be much simpler than the one in [21] and our proofs are nearly always straightforward. We have attempted to formalise the original proof by Sulzmann and Lu [21], but we believe it contains unfillable gaps. In the online version of [21], the authors already acknowledge some small problems, but our experience suggests that there are more serious problems. We also showed that our definition of POSIX values is equivalent to a definition of POSIX values given by Okui and Suzuki [16]. They use a different technique for identifying POSIX values. This equivalence gives additional weight to our claim that our rules capture the informal ideas for POSIX lexing given in [23].
Having proved the correctness of the POSIX lexing algorithm in [21], which lessons have we learned? Well, this is a perfect example of the importance of the right definitions. We have (on and off) explored mechanisations as soon as first versions of [21] appeared, but made little progress with turning the relatively detailed proof sketch in [21] into a formalisable proof. Having seen [24] and adapted the POSIX definition given there for the algorithm by Sulzmann and Lu made all the difference: the proofs, as said, are nearly straightforward. The question remains whether the original proof idea of [21], potentially using our result as a stepping stone, can be made to work. Alas, we really do not know despite considerable effort.
In the context of formalising lexers in theorem provers, closely related to our work is an automata-based lexer formalised in Isabelle/HOL by Nipkow [15]. This lexer also splits up strings into longest initial substrings, but Nipkow’s algorithm is not completely computational. The algorithm by Sulzmann and Lu, in contrast, can be implemented with ease in any functional language. A bespoke and executable lexer for the Imp-language is formalised in Coq as part of the Software Foundations book by Pierce et al. [19]. The disadvantage of such bespoke lexers is that they do not generalise easily to more advanced features. Asperti et al. [1] formalise in the Matita theorem prover the notion of pointed regular expressions in order to elegantly generate DFAs for regular expression matching. While this work focuses on lexing using automata, we find most interesting the connection between the pointed regular expressions and Brzozowski derivatives. This might open up further work on calculating derivatives efficiently. We leave this to future work.
Most closely related to our work is the work by Egolf et al. [6] on the Verbatim lexer, which is formalised in Coq. They have a similar inductive relation for specifying what POSIX matching means. What is good about their approach is that they calculate tokens directly; we in contrast have to use the record regular expression for this. Our approach is slightly more general, but this generality might not be wanted in typical applications. The authors of Verbatim report a good running time with one set of lexing rules, but it is unlikely that this good running time applies universally to all regular expressions and all strings. The reason is that they do not simplify derivatives, which means their sizes can explode. This is the main difference between their work and our work where we have shown that simplification rules do not affect the correctness of the algorithm by Sulzmann and Lu. A simple example where simplification makes a difference is a regular expression like \((a + aa)^\star \), whose derivatives can grow beyond any finite bound given long enough strings composed of \(a\)’s. In contrast, all derivatives of this regular expression stay below the size of 8 if they are simplified after each step. This is important because functions like nullable and derivative need to traverse regular expressions—if the size of derivatives is too large, then these functions will be slow, abysmally slow that is. There is also work by the same authors on Verbatim++, which is an improvement of the Verbatim lexer (using for example memoization) [7]. However, this work has a different focus than ours: their work uses derivatives in order to generate DFAs which are then used for lexing. While this might make the process of lexing faster for the “basic” regular expressions, classic DFAs have problems with bounded regular expressions. For them one has to connect many copies of DFAs, which increases their size and thus slows down the lexing process. As has been shown, derivatives can easily accommodate the bounded regular expressions without the need of having to make copies.
Most recently the work by Moseley et al. [13] has been included in the .NET7 regular expression library. They impressively extend Brzozowski derivatives to various anchors (like start-of-line or end-of-string) and lookarounds (like what is coming before or after a matched string). The latter has also been studied by Miyazaki and Minamide [25]. Moseley et al. already mention a difference between their work and the work described here, namely that properties like \(L(r_1 \cdot r_2) = L(r_1) @ L(r_2)\) do not hold anymore when anchors are added. It remains to be seen how our work can be adapted to such a setting. Another difference between their work and ours is that POSIX lexing is an inherently asymmetric problem, in the sense that it generates longest submatches (recall the Longest Match Rule from the Introduction). This is important for their matching algorithm where they define a reverse operator for regular expressions, written \(r^r\), such that the following property holds: \(L(r^r) = \{\textit{rev}(s) \mid s \in L(r)\}\).
This means the language of the reverse regular expression is the set of reversed strings of L(r). This property is useful for finding substring matches as it allows Moseley et al to first find the end-location where a substring matches a regular expression and then use \(\_^r\) in order to find the beginning of the matched substring. The problem with POSIX lexing is that one cannot use the POSIX value for \(r^r\) and a string rev(s) in order to generate the POSIX value for r and s. We leave a full investigation of what we can adopt from their work to future work.
Our formalisation is available from the Archive of Formal Proofs [3] under http://www.isa-afp.org/entries/Posix-Lexing.shtml.
Notes
POSIX matching acquired its name from the fact that the corresponding rules were described as part of the POSIX specification for Unix-like operating systems [23].
An extended version of [21] is available at the website of its first author; this extended version already includes remarks in the appendix that their informal proof contains gaps, and possible fixes are not fully worked out.
Note that the rule for \(Stars\) differs from our earlier paper [2]. There we used the original definition by Sulzmann and Lu which does not require that values flatten to a non-empty string (see also [14]). Our reason for introducing the more restricted version of lexical values is that the resulting set is always finite, given an r and s, which provides more convenience later on when reasoning about an ordering relation for values.
Sulzmann and Lu state this clause slightly differently, but our deviation is harmless.
All deviations we introduced are harmless.
References
Asperti, A., Coen, C.S., Tassi, E.: Regular expressions, au point (2010). arXiv:1010.2604
Ausaf, F., Dyckhoff, R., Urban, C.: POSIX lexing with derivatives of regular expressions (proof pearl). In: Proc. of the 7th International Conference on Interactive Theorem Proving (ITP). LNCS, vol. 9807, pp. 69–86 (2016)
Ausaf, F., Dyckhoff, R., Urban, C.: POSIX lexing with derivatives of regular expressions. Archive of Formal Proofs (2016). http://www.isa-afp.org/entries/Posix-Lexing.shtml
Brzozowski, J.A.: Derivatives of regular expressions. J. ACM 11(4), 481–494 (1964)
Coquand, T., Siles, V.: A decision procedure for regular expression equivalence in type theory. In: Proc. of the 1st International Conference on Certified Programs and Proofs (CPP). LNCS, vol. 7086, pp. 119–134 (2011)
Egolf, D., Lasser, S., Fisher, K.: Verbatim: a verified lexer generator. In: 2021 IEEE Security and Privacy Workshops (SPW), pp. 92–100 (2021)
Egolf, D., Lasser, S., Fisher, K.: Verbatim++: verified, optimized, and semantically rich lexing with derivatives. In: Proc. of the 11th ACM SIGPLAN Conference on Certified Programs and Proofs (CPP). ACM, pp. 27–39 (2022)
Frisch, A., Cardelli, L.: Greedy regular expression matching. In: Proc. of the 31st International Conference on Automata, Languages and Programming (ICALP). LNCS, vol. 3142, pp. 618–629 (2004)
Grathwohl, N.B.B., Henglein, F., Rasmussen, U.T.: A crash-course in regular expression parsing and regular expressions as types. Technical report, University of Copenhagen (2014)
Hosoya, H., Vouillon, J., Pierce, B.C.: Regular expression types for XML. ACM Trans. Program. Lang. Syst. (TOPLAS) 27(1), 46–90 (2005)
Krauss, A., Nipkow, T.: Proof pearl: regular expression equivalence and relation algebra. J. Autom. Reason. 49, 95–106 (2012)
Kuklewicz, C.: Regex Posix. https://wiki.haskell.org/Regex_Posix
Moseley, D., Nishio, M., Perez Rodriguez, J., Saarikivi, O., Toub, S., Veanes, M., Wan, T., Xu, E.: Derivative based nonbacktracking real-world regex matching with backtracking semantics. In: Proc. of the 44th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) (2023)
Nielsen, L., Henglein, F.: Bit-coded regular expression parsing. In: Proc. of the 5th International Conference on Language and Automata Theory and Applications (LATA). LNCS, vol. 6638, pp. 402–413 (2011)
Nipkow, T.: Verified lexical analysis. In: Proc. of the 11th International Conference on Theorem Proving in Higher Order Logics (TPHOLs). LNCS, vol. 1479, pp. 1–15 (1998)
Okui, S., Suzuki, T.: Disambiguation in regular expression matching via position automata with augmented transitions. In: Proc. of the 15th international conference on implementation and application of automata (CIAA). LNCS, vol. 6482, pp. 231–240 (2010)
Okui, S., Suzuki, T.: Disambiguation in Regular Expression Matching via Position Automata with Augmented Transitions. Technical report, University of Aizu (2013)
Owens, S., Slind, K.: Adapting functional programs to higher order logic. Higher-Order Symb. Comput. 21(4), 377–409 (2008)
Pierce, B.C., Casinghino, C., Gaboardi, M., Greenberg, M., Hriţcu, C., Sjöberg, V., Yorgey, B.: Software foundations. Electronic textbook (2015). http://www.cis.upenn.edu/~bcpierce/sf
Ribeiro, R., Bois, A.D.: Certified bit-coded regular expression parsing. In: Proc. of the 21st Brazilian Symposium on Programming Languages. Association for Computing Machinery, New York, NY, USA (2017)
Sulzmann, M., Lu, K.: POSIX regular expression parsing with derivatives. In: Proc. of the 12th International Conference on Functional and Logic Programming (FLOPS). LNCS, vol. 8475, pp. 203–220 (2014)
Sulzmann, M., van Steenhoven, P.: A flexible and efficient ML lexer tool based on extended regular expression submatching. In: Proc. of the 23rd International Conference on Compiler Construction (CC). LNCS, vol. 8409, pp. 174–191 (2014)
The Open Group Base Specification Issue 6 IEEE Std 1003.1 2004 Edition. http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html (2004)
Vansummeren, S.: Type inference for unique pattern matching. ACM Trans. Program. Lang. Syst. 28(3), 389–428 (2006)
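Miyazaki, T., Minamide, Y.: Derivatives of regular expressions with lookahead. Journal of Information Processing 27, 422–430 (2019)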
Acknowledgements
I am very grateful to Martin Sulzmann for his comments on this work and moreover for patiently explaining the details in [21]. I am also very grateful to Fahad Ausaf who helped with this work and to Flavio Melinte Citea who pointed out omissions in the simplification rules. I am deeply saddened that Roy Dyckhoff, co-author of the original conference paper [2] and the supervisor of my master thesis in St Andrews, died in August 2018. This was the last scientific paper he worked on. Roy was a witty, extremely intelligent and a very pleasant researcher and friend. He is much missed by me and many colleagues.
Funding
No funding was received for this work.
Ethics declarations
Conflict of interest
There is no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This paper is a revised and expanded version of [2]. Compared with that paper we give a second definition for POSIX values introduced by Okui and Suzuki [16, 17] and prove that it is equivalent to our original one. This second definition is based on an ordering of values and very similar to, but not equivalent with, the definition given by Sulzmann and Lu [21]. The advantage of the definition based on the ordering is that it implements more directly the informal rules from the POSIX standard. Furthermore we extend our results to bounded repetitions of regular expressions and character sets.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Urban, C. POSIX Lexing with Derivatives of Regular Expressions. J Autom Reasoning 67, 24 (2023). https://doi.org/10.1007/s10817-023-09667-1