1 Introduction

Recent years have seen a boom in the number of security incidents in computer networks. In order to alleviate the impact of network attacks and intrusions, Internet providers want to detect malicious traffic at their network’s entry points and on the backbones between sub-networks. Software-based network intrusion detection systems (NIDSes), such as the popular open-source system Snort [1], are capable of detecting suspicious network traffic by testing (among other things) whether a packet payload matches a regular expression (regex) describing known patterns of malicious traffic. NIDSes collect and maintain vast databases of such regexes, typically divided into groups according to the types of attacks and target protocols.

Regex matching is the most computationally demanding task of a NIDS as its cost grows with the speed of the network traffic as well as with the number and complexity of the regexes being matched. The current software-based NIDSes cannot perform the regex matching on networks beyond 1 Gbps [2, 3], so they cannot handle the current speed of backbone networks ranging between tens and hundreds of Gbps. A promising approach to speed up NIDSes is to (partially) offload regex matching into hardware [3,4,5]. The hardware then serves as a pre-filter of the network traffic, discarding the majority of the packets from further processing. Such pre-filtering can easily reduce the traffic the NIDS needs to handle by two or three orders of magnitude [3].

Field-programmable gate arrays (FPGAs) are the leading technology in high-throughput regex matching. Due to their inherent parallelism, FPGAs provide an efficient way of implementing nondeterministic finite automata (NFAs), which naturally arise from the input regexes. Although the amount of available resources in FPGAs is continually increasing, the speed of networks grows even faster. Working with multi-gigabit networks requires the hardware to use many parallel packet processing branches in a single FPGA [5], each of them implementing a separate copy of the concerned NFA; reducing the size of the NFAs is therefore of the utmost importance. Various language-preserving automata reduction approaches exist, mainly based on computing (bi)simulation relations on automata states (cf. the related work). The reductions they offer, however, do not satisfy the needs of high-speed hardware-accelerated NIDSes.

Our answer to the problem is approximate reduction of NFAs, allowing for a trade-off between the achieved reduction and the precision of the regex matching. To formalise the intuitive notion of precision, we propose a novel probabilistic distance of automata. It captures the probability that a packet of the input network traffic is incorrectly accepted or rejected by the approximated NFA. The distance assumes a probabilistic model of the network traffic (we show later how such a model can be obtained).

Having formalised the notion of precision, we specify the target of our reductions as two variants of an optimization problem: (1) minimizing the NFA size given the maximum allowed error (distance from the original), or (2) minimizing the error given the maximum allowed NFA size. Finding such optimal approximations is, however, computationally hard (PSPACE-complete, the same as precise NFA minimization).

Consequently, we sacrifice the optimality and, motivated by the typical structure of NFAs that emerge from a set of regexes used by NIDSes (a union of many long “tentacles” with occasional small strongly-connected components), we limit the space of possible reductions by restricting the set of operations they can apply to the original automaton. Namely, we consider two reduction operations: (i) collapsing the future of a state into a self-loop (this reduction over-approximates the language), or (ii) removing states (such a reduction is under-approximating).

The problem of identifying the optimal sets of states on which these operations should be applied is still PSPACE-complete. The restricted problem is, however, more amenable to an approximation by a greedy algorithm. The algorithm applies the reductions state-by-state in an order determined by a precomputed error labelling of the states. The process is stopped once the given optimization goal in terms of the size or error is reached. The labelling is based on the probability of packets that may be accepted through a given state and hence over-approximates the error that may be caused by applying the reduction at a given state. As our experiments show, this approach can give us high-quality reductions while ensuring formal error bounds.

Finally, it turns out that even the pre-computation of the error labelling of the states is costly (again PSPACE-complete). Therefore, we propose several ways to cheaply over-approximate it such that the strong error bound guarantees are still preserved. Particularly, we are able to exploit the typical structure of the “union of tentacles” of the hardware NFA in an algorithm that is exponential in the size of the largest “tentacle” only, which is indeed much faster in practice.

We have implemented our approach and evaluated it on regexes used to classify malicious traffic in Snort. We obtain quite encouraging experimental results demonstrating that our approach provides a much better reduction than language-preserving techniques with an almost negligible error. In particular, our experiments, going down to the level of an actual implementation of NFAs in FPGAs, confirm that we can squeeze into an up-to-date FPGA chip real-life regexes encoding malicious traffic, allowing them to be used with a negligible error for filtering at speeds of 100 Gbps (and even 400 Gbps). This is far beyond what one can achieve with current exact reduction approaches.

Related Work. Hardware acceleration for regex matching at the line rate is an intensively studied technology that uses general-purpose hardware [6,7,8,9,10,11,12,13,14] as well as FPGAs [3,4,5, 15,16,17,18,19,20]. Most of the works focus on DFA implementation and optimization techniques. NFAs can be exponentially smaller than DFAs but need, in the worst case, \(\mathcal {O}(n)\) memory accesses to process each byte of the payload where n is the number of states. In most cases, this incurs an unacceptable slowdown. Several works alleviate this disadvantage of NFAs by exploiting reconfigurability and fine-grained parallelism of FPGAs, allowing one to process one character per clock cycle (e.g. [3,4,5, 15, 16, 19, 20]).

In [14], which is probably the closest work to ours, the authors consider a set of regexes describing network attacks. They replace a potentially prohibitively large DFA by a tree of smaller DFAs, an alternative to using NFAs that minimizes the latency occurring in a non-FPGA-based implementation. The language of every DFA-node in the tree over-approximates the languages of its children. Packets are filtered through the tree from the root downwards until they belong to the language of the encountered nodes, and may be finally accepted at the leaves, or are rejected otherwise. The over-approximating DFAs are constructed using a similar notion of probability of an occurrence of a state as in our approach. The main differences from our work are that (1) the approach targets approximation of DFAs (not NFAs), (2) the over-approximation is based on a given traffic sample only (it cannot benefit from a probabilistic model), and (3) no probabilistic guarantees on the approximation error are provided.

Approximation of DFAs was considered in various other contexts. Hyper-minimization is an approach that is allowed to alter language membership of a finite set of words [21, 22]. A DFA with a given maximum number of states is constructed in [23], minimizing the error defined either by (i) counting prefixes of misjudged words up to some length, or (ii) the sum of the probabilities of the misjudged words wrt the Poisson distribution over \(\varSigma ^*\). Neither of these approaches considers reduction of NFAs or allows one to control the expected error with respect to the real traffic.

In addition to the metrics used in the works [21,22,23] discussed above, further metrics are worth mentioning. The Cesaro-Jaccard distance studied in [24] is, in spirit, similar to [23] and also does not reflect the probability of individual words. The edit distance of weighted automata from [25] depends on the minimum edit distance between pairs of words from the two compared languages, again regardless of their statistical significance. None of these notions is suitable for our needs.

Language-preserving minimization of NFAs is a PSPACE-complete problem [26, 38]. More feasible (polynomial-time) is language-preserving size reduction of NFAs based on (bi)simulations [27,28,29,30], which does not aim for a truly minimal NFA. A number of advanced variants exist, based on multi-pebble or look-ahead simulations, or on combinations of forward and backward simulations [31,32,33]. The practical efficiency of these techniques is, however, often insufficient to allow them to handle the large NFAs that occur in practice and/or they do not manage to reduce the NFAs enough. Finally, even a minimal NFA for the given set of regexes is often too big to be implemented in the given FPGA operating at the required speed (as shown even in our experiments). Our approach is capable of a much better reduction at the price of a small change to the accepted language.

2 Preliminaries

We use \(\langle a, b\rangle \) to denote the set \(\{x \in \mathbb {R}\mid a \le x \le b\}\) and \(\mathbb {N}\) to denote the set \(\{ 0,1,2,\dots \}\). Given a pair of sets \(X_1\) and \(X_2\), we use \(X_1 \mathop {\bigtriangleup }X_2\) to denote their symmetric difference, i.e., the set \(\{x \mid \exists ! i \in \{1,2\} : x \in X_i\}\). We use the notation \([v_1, \ldots , v_n]\) to denote a vector of n elements, \(\varvec{1}\) to denote the all 1’s vector \([1, \ldots , 1]\), \(\varvec{A}\) to denote a matrix, \(\varvec{A}^\top \) its transpose, and \(\varvec{I}\) the identity matrix.

In the following, we fix a finite non-empty alphabet \(\varSigma \). A nondeterministic finite automaton (NFA) is a quadruple \(A = (Q, \delta , I, F)\) where Q is a finite set of states, \(\delta : Q\times \varSigma \rightarrow 2^Q\) is a transition function, \(I \subseteq Q\) is a set of initial states, and \(F\subseteq Q\) is a set of accepting states. We use \(Q[A], \delta [A], I[A]\), and \(F[A]\) to denote \(Q, \delta , I\), and F, respectively, and \(q \xrightarrow {a} q'\) to denote that \(q' \in \delta (q, a)\). A sequence of states \(\rho = q_0 \cdots q_n\) is a run of A over a word \(w = a_1 \cdots a_n \in \varSigma ^*\) from a state q to a state \(q'\), denoted as \(q {\mathop {\leadsto }\limits ^{w, \rho }} q'\), if \(\forall 1 \le i \le n: q_{i-1} \xrightarrow {a_i} q_i\), \(q_0 = q\), and \(q_n = q'\). Sometimes, we use \(\rho \) in set operations where it behaves as the set of states it contains. We also use \(q {\mathop {\leadsto }\limits ^{w}} q'\) to denote that \(\exists \rho \in Q^*: q {\mathop {\leadsto }\limits ^{w,\rho }} q'\) and \(q \leadsto q'\) to denote that \(\exists w: q {\mathop {\leadsto }\limits ^{w}} q'\). The language of a state q is defined as \(L_{A}(q) = \{w \mid \exists q_F \in F: q {\mathop {\leadsto }\limits ^{w}} q_F\}\) and its banguage (back-language) is defined as \(L^{\flat }_{A}(q) = \{w \mid \exists q_I \in I: q_I {\mathop {\leadsto }\limits ^{w}} q\}\). Both notions can be naturally extended to a set \(S \subseteq Q\): \(L_{A}(S) = \bigcup _{q\in S} L_{A}(q)\) and \(L^{\flat }_{A}(S) = \bigcup _{q\in S} L^{\flat }_{A}(q)\). We drop the subscript A when the context is obvious. A accepts the language \(L(A)\) defined as \(L(A) = L_{A}(I)\). A is called deterministic (DFA) if \(|I| = 1\) and \(\forall q\in Q\) and \(\forall a\in \varSigma :\) \(|\delta (q,a)|\le 1\), and unambiguous (UFA) if \(\forall w \in L(A): \exists ! q_I \in I, \rho \in Q^*, q_F \in F: q_I {\mathop {\leadsto }\limits ^{w, \rho }} q_F\).

The restriction of A to \(S \subseteq Q\) is an NFA \(A_{|S}\) given as \(A_{|S} = (S, \delta \cap (S \times \varSigma \times 2^S), I \cap S, F \cap S)\). We define the trim operation as \( trim (A) = A_{|C}\) where \(C = \{ q \mid \exists q_I\in I, q_F\in F: q_I \leadsto q \leadsto q_F \}\). For a set of states \(R \subseteq Q\), we use \( reach (R)\) to denote the set of states reachable from R, formally, \( reach (R) = \{r' \mid \exists r \in R: r \leadsto r'\}\). We use the number of states as the measurement of the size of A, i.e., \(|A| = |Q|\).

A (discrete probability) distribution over a set X is a mapping \(\mathrm {Pr}: X \rightarrow \langle 0, 1\rangle \) such that \(\sum _{x \in X} \mathrm {Pr}(x) = 1\). An n-state probabilistic automaton (PA) over \(\varSigma \) is a triple \(P = (\varvec{\alpha }, \varvec{\gamma }, \{\varvec{\varDelta }_a\}_{a \in \varSigma })\) where \(\varvec{\alpha }\in \langle 0, 1\rangle ^n\) is a vector of initial weights, \(\varvec{\gamma }\in \langle 0, 1\rangle ^n\) is a vector of final weights, and for every \(a \in \varSigma \), \(\varvec{\varDelta }_a\in \langle 0, 1\rangle ^{n\times n}\) is a transition matrix for symbol a. We abuse notation and use \(Q[P]\) to denote the set of states \(Q[P] = \{1, \ldots , n\}\). Moreover, the following two properties need to hold: (i) \(\sum \{\varvec{\alpha }[i] \mid i \in Q[P]\} = 1\) (the initial probability is 1) and (ii) for every state \(i \in Q[P]\) it holds that \(\sum \{\varvec{\varDelta }_a[i, j] \mid j\in Q[P], a\in \varSigma \} + \varvec{\gamma }[i] = 1\) (the probability of accepting or leaving a state is 1). We define the support of P as the NFA \( supp (P) = (Q[P], \delta [P], I[P], F[P])\) s.t.

$$\begin{aligned} \delta [P]&= \{(i, a, j) \mid \varvec{\varDelta }_a[i, j]> 0\}&I[P]&= \{i \mid \varvec{\alpha }[i]> 0\}&F[P]&= \{i \mid \varvec{\gamma }[i] > 0\}. \end{aligned}$$

Let us assume that every PA P is such that \( supp (P) = trim ( supp (P))\). For a word \(w = a_1 \ldots a_k \in \varSigma ^*\), we use \(\varvec{\varDelta }_w\) to denote the matrix \(\varvec{\varDelta }_{a_1} \cdots \varvec{\varDelta }_{a_k}\). It can be easily shown that P represents a distribution over words \(w \in \varSigma ^*\) defined as \(\mathrm {Pr}_P(w) = \varvec{\alpha }^\top \cdot \varvec{\varDelta }_w \cdot \varvec{\gamma }\). We call \(\mathrm {Pr}_P(w)\) the probability of w in P. Given a language \(L \subseteq \varSigma ^*\), we define the probability of L in P as \(\mathrm {Pr}_P(L) = \sum _{w \in L} \mathrm {Pr}_P(w)\).
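
For illustration, the probability of a single word can be computed directly from the definition. The two-state PA below is a made-up toy example (not the traffic model of Sect. 5), and the sketch is ours, given only as an illustration.

```python
import numpy as np

# A toy 2-state PA over the alphabet {a, b}; the numbers are made up but chosen
# so that conditions (i) and (ii) of the definition hold.
alpha = np.array([1.0, 0.0])                      # initial weights
gamma = np.array([0.1, 0.5])                      # final weights
Delta = {
    'a': np.array([[0.2, 0.3],
                   [0.0, 0.2]]),
    'b': np.array([[0.4, 0.0],
                   [0.3, 0.0]]),
}
# state 0: 0.2 + 0.3 + 0.4 + 0.0 + 0.1 = 1;  state 1: 0.0 + 0.2 + 0.3 + 0.0 + 0.5 = 1

def prob_of_word(alpha, gamma, Delta, word):
    """Pr_P(w) = alpha^T . Delta_{a_1} ... Delta_{a_k} . gamma."""
    v = alpha
    for a in word:
        v = v @ Delta[a]
    return float(v @ gamma)

print(prob_of_word(alpha, gamma, Delta, "ab"))    # Pr_P(ab) = 0.017
```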

If Conditions (i) and (ii) from the definition of PAs are dropped, we speak about a pseudo-probabilistic automaton (PPA), which may assign a word from its support a quantity that is not necessarily in the range \(\langle 0, 1\rangle \), denoted as the significance of the word below. PPAs may arise during some of our operations performed on PAs.

3 Approximate Reduction of NFAs

In this section, we first introduce the key notion of our approach: a probabilistic distance of a pair of finite automata wrt a given probabilistic automaton that, intuitively, represents the significance of particular words. We discuss the complexity of computing the probabilistic distance. Finally, we formulate two problems of approximate automata reduction via probabilistic distance. Proofs of the lemmas can be found in [43].

3.1 Probabilistic Distance

We start by defining our notion of a probabilistic distance of two NFAs. Assume NFAs \(A_1\) and \(A_2\) and a probabilistic automaton P specifying the distribution \(\mathrm {Pr}_P: \varSigma ^* \rightarrow \langle 0, 1\rangle \). The probabilistic distance \(d_P(A_1,A_2)\) between \(A_1\) and \(A_2\) wrt \(\mathrm {Pr}_P\) is defined as

$$\begin{aligned} d_P(A_1,A_2) = \mathrm {Pr}_P(L(A_1) \mathop {\bigtriangleup }L(A_2)). \end{aligned}$$

Intuitively, the distance captures the significance of the words accepted by one of the automata only. We use the distance to drive the reduction process towards automata with small errors and to assess the quality of the resulting automata.

The value of \(\mathrm {Pr}_P(L(A_1) \mathop {\bigtriangleup }L(A_2))\) can be computed as follows. Using the fact that (1) \(L_1 \mathop {\bigtriangleup }L_2 = (L_1 \setminus L_2) \ \mathop {\uplus }\ (L_2 \setminus L_1)\) and (2) \(L_1 \setminus L_2 = L_1 \setminus (L_1 \cap L_2)\), we get

$$\begin{aligned} \begin{aligned} d_P(A_1,A_2)&= \mathrm {Pr}_P(L(A_1) \setminus L(A_2)) + \mathrm {Pr}_P(L(A_2) \setminus L(A_1)) \\&= \mathrm {Pr}_P(L(A_1) \setminus (L(A_1)\cap L(A_2))) + \mathrm {Pr}_P(L(A_2) \setminus (L(A_2)\cap L(A_1))) \\&= \mathrm {Pr}_P(L(A_1)) + \mathrm {Pr}_P(L(A_2)) - 2\cdot \mathrm {Pr}_P(L(A_1) \cap L(A_2)). \end{aligned} \end{aligned}$$

Hence, the key step is to compute \(\mathrm {Pr}_P(L(A))\) for an NFA A and a PA P. Problems similar to computing such a probability have been extensively studied in several contexts including verification of probabilistic systems [34,35,36]. The lemma below summarises the complexity of this step.

Lemma 1

Let P be a PA and A an NFA. The problem of computing \(\mathrm {Pr}_P(L(A))\) is PSPACE-complete. For a UFA A, \(\mathrm {Pr}_P(L(A))\) can be computed in PTIME.

In our approach, we apply the method of [36] and compute \(\mathrm {Pr}_P(L(A))\) in the following way. We first check whether the NFA A is unambiguous. This can be done by using the standard product construction (denoted as \(\cap \)) for computing the intersection of the NFA A with itself and trimming the result, formally \(B = trim (A \cap A)\), followed by a check whether there is some state \((p,q)\in Q[B]\) s.t. \(p \ne q\) [37]. If A is ambiguous, we either determinise it or disambiguate it [37], leading to a DFA/UFA \(A'\), respectively.Footnote 1 Then, we construct the trimmed product of \(A'\) and P (this can be seen as computing \(A' \cap supp (P)\) while keeping the probabilities from P on the edges of the result), yielding a PPA \(R = (\varvec{\alpha }, \varvec{\gamma }, \{\varvec{\varDelta }_a\}_{a \in \varSigma })\).Footnote 2 Intuitively, R represents not only the words of L(A) but also their probability in P. Now, let \(\varvec{\varDelta }= \sum _{a \in \varSigma } \varvec{\varDelta }_a\) be the matrix that expresses, for any \(p,q \in Q[R]\), the significance of getting from p to q via any \(a \in \varSigma \). Further, it can be shown (cf. the proof of Lemma 1 in [43]) that the matrix \(\varvec{\varDelta }^*\), representing the significance of going from p to q via any \(w \in \varSigma ^*\), can be computed as \((\varvec{I}- \varvec{\varDelta })^{-1}\). Then, to get \(\mathrm {Pr}_P(L(A))\), it suffices to take \(\varvec{\alpha }^\top \cdot \varvec{\varDelta }^*\cdot \varvec{\gamma }\). Note that, due to the determinisation/disambiguation step, the obtained value indeed is \(\mathrm {Pr}_P(L(A))\) despite R being a PPA.
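
Assuming the trimmed product R is already available as NumPy matrices, the final step can be sketched as follows (the function name is ours). The probabilistic distance \(d_P(A_1,A_2)\) is then obtained by combining three such values as in the derivation above.

```python
import numpy as np

def prob_of_language(alpha, gamma, Deltas):
    """Given the trimmed product R = (alpha, gamma, {Delta_a}) of a
    determinised/disambiguated A with the PA P, return
    Pr_P(L(A)) = alpha^T . (I - Delta)^-1 . gamma  with  Delta = sum_a Delta_a."""
    Delta = sum(Deltas.values())
    n = Delta.shape[0]
    # solve (I - Delta) x = gamma instead of inverting the matrix explicitly
    x = np.linalg.solve(np.eye(n) - Delta, gamma)
    return float(alpha @ x)
```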

3.2 Automata Reduction Using Probabilistic Distance

We now exploit the above introduced probabilistic distance to formulate the task of approximate reduction of NFAs as the following two optimisation problems. Given an NFA A and a PA P specifying the distribution \(\mathrm {Pr}_P: \varSigma ^* \rightarrow \langle 0, 1\rangle \), we define

  • size-driven reduction: for \(n \in \mathbb {N}\), find an NFA \(A'\) such that \(|A'| \le n\) and the distance \(d_P(A, A')\) is minimal,

  • error-driven reduction: for \(\epsilon \in \langle 0, 1\rangle \), find an NFA \(A'\) such that \(d_P(A, A') \le \epsilon \) and the size \(|A'|\) is minimal.

The following lemma shows that the natural decision problem underlying both of the above optimization problems is PSPACE-complete, which matches the complexity of computing the probabilistic distance as well as that of the exact reduction of NFAs [38].

Lemma 2

Consider an NFA A, a PA P, a bound on the number of states \(n \in \mathbb {N}\), and an error bound \(\epsilon \in \langle 0, 1\rangle \). It is PSPACE-complete to determine whether there exists an NFA \(A'\) with n states s.t. \(d_P(A, A') \le \epsilon \).

The notions defined above do not distinguish between introducing false positive answers (\(A'\) accepts a word \(w \notin L(A)\)) and false negative answers (\(A'\) does not accept a word \(w \in L(A)\)). To address this, we define over-approximating and under-approximating reductions as reductions for which the additional conditions \(L(A) \subseteq L(A')\) and \(L(A) \supseteq L(A')\) hold, respectively.

A naïve solution to the reductions would enumerate all NFAs \(A'\) of sizes from 0 up to n (resp. |A|), for each of them compute \(d_P(A, A')\), and take an automaton with the smallest probabilistic distance (resp. a smallest one satisfying the restriction on \(d_P(A, A')\)). Obviously, this approach is computationally infeasible.

4 A Heuristic Approach to Approximate Reduction

In this section, we introduce two techniques for approximate reduction of NFAs that avoid the need to iterate over all automata of a certain size. The first approach under-approximates the automata by removing states—we call it the pruning reduction—while the second approach over-approximates the automata by adding self-loops to states and removing redundant states—we call it the self-loop reduction. Finding an optimal automaton using these reductions is also PSPACE-complete, but the problem is more amenable to heuristics like greedy algorithms. We start by introducing two high-level greedy algorithms, one for the size- and one for the error-driven reduction, and follow by showing their instantiations for the pruning and the self-loop reduction. A crucial role in the algorithms is played by a function that labels states of the automata by an estimate of the error that will be caused when one of the reductions is applied at a given state.

Algorithm 1: A greedy size-driven reduction (pseudocode figure)

4.1 A General Algorithm for Size-Driven Reduction

Algorithm 1 shows a general greedy method for performing the size-driven reduction. In order to use the same high-level algorithm in both directions of reduction (over/under-approximating), it is parameterized with three functions: \( label , reduce \), and \( error \). The real intricacy of the procedure is hidden inside these three functions. Intuitively, \( label (A, P)\) assigns every state of an NFA A an approximation of the error that will be caused wrt the PA P when a reduction is applied at this state, while the purpose of \( reduce (A,V)\) is to create a new NFA \(A'\) obtained from A by introducing some error at states from V.Footnote 3 Further, \( error (A,V, label (A, P))\) estimates the error introduced by the application of \( reduce (A,V)\), possibly in a more precise (and costly) way than by just summing the concerned error labels: Such a computation is possible outside of the main computation loop. We show instantiations of these functions later, when discussing the reductions used. Moreover, the algorithm is also parameterized with a total order \(\mathrel {\preceq }_{A, label (A, P)}\) that defines which states of A are processed first and which are processed later. The ordering may take into account the precomputed labelling. The algorithm accepts an NFA A, a PA P, and \(n \in \mathbb {N}\) and outputs a pair consisting of an NFA \(A'\) of the size \(|A'| \le n\) and an error bound \(\epsilon \) such that \(d_P(A, A') \le \epsilon \).

The main idea of the algorithm is that it creates a set V of states where an error is to be introduced. V is constructed by starting from an empty set and adding states to it in the order given by \(\mathrel {\preceq }_{A, label (A, P)}\), until the size of the result of \( reduce (A,V)\) has reached the desired bound n (in our setting, \( reduce \) is always antitone, i.e., for \(V \subseteq V'\), it holds that \(| reduce (A,V)| \ge | reduce (A,V')|\)). We now define the necessary condition for \( label , reduce \), and \( error \) that makes Algorithm 1 correct.
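
Since the pseudocode of Algorithm 1 is shown only as a figure, the following Python sketch captures the greedy loop described above. The parameter functions are passed in as callables, states are processed in the order of their precomputed labels (the concrete order later used in Sect. 5.2), and all names are ours, given only as an illustration.

```python
def size_driven_reduction(A, P, n, label, reduce, error):
    """Greedy size-driven reduction (a sketch of Algorithm 1).

    Returns a pair (A', eps) with |A'| <= n and d_P(A, A') <= eps,
    provided that label, reduce, and error satisfy condition C1.
    """
    assert n >= 1
    ell = label(A, P)                  # precomputed error labelling of states
    V = set()                          # states where an error is introduced
    A_red = reduce(A, V)               # equals A by C1(c)
    for q in sorted(A.states, key=lambda q: ell[q]):
        if len(A_red.states) <= n:     # reduce is antitone, so we may stop here
            break
        V.add(q)
        A_red = reduce(A, V)
    return A_red, error(A, V, ell)
```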

Condition

C1 holds if for every NFA A, PA P, and a set \(V \subseteq Q[A]\), we have that (a) \( error (A,V, label (A, P)) \ge d_P(A, reduce (A,V))\), (b) \(| reduce (A,Q[A])| \le 1\), and (c) \( reduce (A,\emptyset ) = A\).

C1(a) ensures that the error computed by the reduction algorithm indeed over-approximates the exact probabilistic distance, C1(b) ensures that the algorithm can (in the worst case, by applying the reduction at every state of A) for any \(n \ge 1\) output a result \(A'\) of the size \(|A'|\le n\), and C1(c) ensures that when no error is to be introduced at any state, we obtain the original automaton.

Lemma 3

Algorithm 1 is correct if C1 holds.

Algorithm 2: A greedy error-driven reduction (pseudocode figure)

4.2 A General Algorithm for Error-Driven Reduction

In Algorithm 2, we provide a high-level method of computing the error-driven reduction. The algorithm is in many ways similar to Algorithm 1: it also computes a set of states V where an error is to be introduced, but an important difference is that we compute an approximation of the error in each step and only add q to V if it does not raise the error over the threshold \(\epsilon \). Note that \( error \) need not be monotone, so it may be advantageous to traverse all states from Q and not terminate as soon as the threshold is reached. The correctness of Algorithm 2 also depends on C1.
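
The error-driven variant can be sketched analogously (again, the names are ours and the sketch is only illustrative).

```python
def error_driven_reduction(A, P, eps, label, reduce, error):
    """Greedy error-driven reduction (a sketch of Algorithm 2)."""
    ell = label(A, P)
    V = set()
    # traverse all states: error need not be monotone, so we do not stop
    # as soon as the threshold is reached
    for q in sorted(A.states, key=lambda q: ell[q]):
        if error(A, V | {q}, ell) <= eps:   # keep q only if the bound still holds
            V.add(q)
    return reduce(A, V), error(A, V, ell)
```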

Lemma 4

Algorithm 2 is correct if C1 holds.

4.3 Pruning Reduction

The pruning reduction is based on identifying a set of states to be removed from an NFA A, under-approximating the language of A. In particular, for \(A = (Q, \delta , I, F)\), the pruning reduction finds a set \(R \subseteq Q\) and restricts A to \(Q\setminus R\), followed by removing useless states, to construct a reduced automaton \(A' = trim (A_{|Q\setminus R})\). Note that the natural decision problem corresponding to this reduction is also PSPACE-complete.

Lemma 5

Consider an NFA A, a PA P, a bound on the number of states \(n \in \mathbb {N}\), and an error bound \(\epsilon \in \langle 0, 1\rangle \). It is PSPACE-complete to determine whether there exists a subset of states \(R \subseteq Q[A]\) of the size \(|R| = n\) such that \(d_P(A, A_{|R}) \le \epsilon \).

Although Lemma 5 shows that the pruning reduction is as hard as a general reduction (cf. Lemma 2), the pruning reduction is more amenable to the use of heuristics like the greedy algorithms from Sects. 4.1 and 4.2. We instantiate \( reduce \), \( error \), and \( label \) in these high-level algorithms in the following way (the subscript p means pruning):

$$\begin{aligned} reduce _p(A,V) = trim (A_{|Q\setminus V}),&error _p(A,V,\ell )= \min _{V' \in \lfloor V \rfloor _p} \sum \left\{ \ell (q) \mid q\in V'\right\} , \end{aligned}$$

where \(\lfloor V \rfloor _p\) is defined as follows. Because of the use of \( trim \) in \( reduce _p\), for a pair of sets \(V, V'\) s.t. \(V \subset V'\), it holds that \( reduce _p(A,V)\) may, in general, yield the same automaton as \( reduce _p(A,V')\). Hence, we define a partial order \(\mathrel {\sqsubseteq _p}\) on \(2^Q\) as \(V_1 \mathrel {\sqsubseteq _p}V_2\) iff \( reduce _p(A,V_1) = reduce _p(A,V_2)\) and \(V_1 \subseteq V_2\), and use \(\lfloor V \rfloor _p\) to denote the set of elements that are minimal wrt \(\mathrel {\sqsubseteq _p}\) among all \(V'\) with \(V' \mathrel {\sqsubseteq _p}V\). The value of the approximation \( error _p(A,V,\ell )\) is therefore the minimum of the sum of errors over all sets from \(\lfloor V \rfloor _p\).

Note that the size of \(\lfloor V \rfloor _p\) can again be exponential, and thus we employ a greedy approach for guessing an optimal \(V'\). Clearly, this cannot affect the soundness of the algorithm, but only decreases the precision of the bound on the distance. Our experiments indicate that for automata appearing in NIDSes, this simplification has typically only a negligible impact on the precision of the bounds.
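
One possible instantiation of this greedy guess, assuming an NFA object offering the restrict and trim operations from Sect. 2, might look as follows; the method names and the concrete strategy are ours, given only as an illustration.

```python
def reduce_p(A, V):
    """Pruning reduction: restrict A to the states outside V and trim."""
    return A.restrict(set(A.states) - set(V)).trim()

def error_p(A, V, ell):
    """Greedy over-approximation of error_p(A, V, ell): instead of minimising
    the label sum over all sets in ⌊V⌋_p, start from V and drop states whose
    explicit removal is already implied by trimming, heaviest labels first."""
    target = set(reduce_p(A, V).states)
    V_prime = set(V)
    for q in sorted(V, key=lambda q: ell[q], reverse=True):
        if set(reduce_p(A, V_prime - {q}).states) == target:
            V_prime.discard(q)         # q is removed by trim anyway
    return sum(ell[q] for q in V_prime)
```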

For computing the state labelling, we provide the following three functions, which differ in the precision they provide and the difficulty of their computation (naturally, more precise labellings are harder to compute): \( label _p^1, label _p^2\), and \( label _p^3\). Given an NFA A and a PA P, they generate the labellings \(\ell _p^1, \ell _p^2\), and \(\ell _p^3\), respectively, defined as

$$\begin{aligned} \ell _p^1(q)&= \sum \left\{ \mathrm {Pr}_P\left( L^{\flat }_{A}(q_F)\right) \mid q_F \in reach (q) \cap F\right\} ,&\ell _p^2(q)&= \mathrm {Pr}_P\left( L^{\flat }_{A}( reach (q) \cap F)\right) , \end{aligned}$$
$$\begin{aligned} \ell _p^3(q)&= \mathrm {Pr}_P\left( L^{\flat }_{A}(q).L_{A}(q)\right) . \end{aligned}$$

A state label \(\ell (q)\) approximates the error of the words removed from \(L(A)\) when q is removed. More concretely, \(\ell _p^1(q)\) is a rough estimate saying that the error can be bounded by the sum of probabilities of the banguages of all final states reachable from q (in the worst case, all those final states might become unreachable). Note that \(\ell _p^1(q)\) (1) counts the error of a word accepted in two different final states of \( reach (q)\) twice, and (2) also considers words that are accepted in some final state in \( reach (q)\) without going through q. The labelling \(\ell _p^2\) deals with (1) by computing the total probability of the banguage of the set of all final states reachable from q, and the labelling \(\ell _p^3\) in addition also deals with (2) by only considering words that traverse through q (they can still be accepted in some final state not in \( reach (q)\) though, so even \(\ell _p^3\) is still imprecise). Note that if A is unambiguous then \(\ell _p^1 = \ell _p^2\).

When computing the label of q, we first modify A to obtain \(A'\) accepting the language related to the particular labelling. Then, we compute the value of \(\mathrm {Pr}_P(L(A'))\) using the algorithm from Sect. 3.1. Recall that this step is in general costly, due to the determinisation/disambiguation of \(A'\). The key property of the labelling computation resides in the fact that if A is composed of several disjoint sub-automata, the automaton \(A'\) is typically much smaller than A and thus the computation of the label is considerably less demanding. Since the automata appearing in regex matching for NIDS are composed of the union of “tentacles”, the particular \(A'\)s are very small, which enables efficient component-wise computation of the labels.

The following lemma states the correctness of using the pruning reduction as an instantiation of Algorithms 1 and 2 and also the relation among \(\ell _p^1\), \(\ell _p^2\), and \(\ell _p^3\).

Lemma 6

For every \(x\in \{1,2,3\}\), the functions \( reduce _p\), \( error _p\), and \( label _p^x\) satisfy C1. Moreover, consider an NFA A, a PA P, and let \(\ell _p^x = label _p^x(A, P)\) for \(x \in \{1,2,3\}\). Then, for each \(q \in Q[A]\), we have \(\ell _p^1(q) \ge \ell _p^2(q) \ge \ell _p^3(q)\).

4.4 Self-loop Reduction

The main idea of the self-loop reduction is to over-approximate the language of A by adding self-loops over every symbol at selected states. This makes some states of A redundant, allowing them to be removed without introducing any more error. Given an NFA \(A = (Q, \delta , I, F)\), the self-loop reduction searches for a set of states \(R \subseteq Q\), which will have self-loops added, and removes other transitions leading out of these states, making some states unreachable. The unreachable states are then removed.

Formally, let \( sl (A,R)\) be the NFA \((Q, \delta ', I, F)\) whose transition function \(\delta '\) is defined, for all \(p \in Q\) and \(a \in \varSigma \), as \(\delta '(p,a) = \{p\}\) if \(p \in R\) and \(\delta '(p,a) = \delta (p,a)\) otherwise. As with the pruning reduction, the natural decision problem corresponding to the self-loop reduction is also PSPACE-complete.
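
For concreteness, a minimal sketch of the \( sl \) operation on an explicit NFA representation is given below; the dataclass and all names are ours, not part of the paper's tool.

```python
from dataclasses import dataclass

@dataclass
class NFA:
    states: set     # Q
    delta: dict     # maps (state, symbol) to the set of successor states
    initial: set    # I
    final: set      # F

def sl(A, R, Sigma):
    """sl(A, R): each state in R keeps only a self-loop over every symbol of
    Sigma; all other states keep their original outgoing transitions."""
    delta = {(p, a): ({p} if p in R else set(A.delta.get((p, a), set())))
             for p in A.states for a in Sigma}
    return NFA(set(A.states), delta, set(A.initial), set(A.final))
```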

Lemma 7

Consider an NFA A, a PA P, a bound on the number of states \(n \in \mathbb {N}\), and an error bound \(\epsilon \in \langle 0, 1\rangle \). It is PSPACE-complete to determine whether there exists a subset of states \(R \subseteq Q[A]\) of the size \(|R| = n\) such that \(d_P(A, sl (A,R)) \le \epsilon \).

The required functions in the error- and size-driven reduction algorithms are instantiated in the following way (the subscript \( sl \) means self-loop):

$$\begin{aligned} reduce _{ sl }(A,V) = trim ( sl (A,V)),&error _{ sl }(A,V,\ell )= \sum \left\{ \ell (q) \mid q\in \min \left( \lfloor V \rfloor _{ sl }\right) \right\} , \end{aligned}$$

where \(\lfloor V \rfloor _{ sl }\) is defined in a similar manner as \(\lfloor V \rfloor _p\) in the previous section (using a partial order \(\mathrel {\sqsubseteq _{ sl }}\) defined similarly to \(\mathrel {\sqsubseteq _p}\); in this case, the order \(\mathrel {\sqsubseteq _{ sl }}\) has a single minimal element, though).

The functions \( label _{ sl }^1, label _{ sl }^2\), and \( label _{ sl }^3\) compute the state labellings \(\ell _{ sl }^1, \ell _{ sl }^2\), and \(\ell _{ sl }^3\) for an NFA A and a PA P defined as follows:

$$\begin{aligned} \ell _{ sl }^1(q)&= weight _{P}(L^{\flat }_{A}(q)),&\ell _{ sl }^2(q)&= \mathrm {Pr}_P\left( L^{\flat }_{A}(q).\varSigma ^*\right) , \end{aligned}$$
$$\begin{aligned} \ell _{ sl }^3(q)&= \ell _{ sl }^2(q) - \mathrm {Pr}_P\left( L^{\flat }_{A}(q).L_{A}(q)\right) . \end{aligned}$$

Above, \( weight _{P}(w)\) for a PA \(P = (\varvec{\alpha }, \varvec{\gamma }, \{\varvec{\varDelta }_a\}_{a \in \varSigma })\) and a word \(w \in \varSigma ^*\) is defined as \( weight _{P}(w) = \varvec{\alpha }^\top \cdot \varvec{\varDelta }_w \cdot \varvec{1}\) (i.e., similarly as \(\mathrm {Pr}_P(w)\) but with the final weights \(\varvec{\gamma }\) discarded), and \( weight _{P}(L)\) for \(L \subseteq \varSigma ^*\) is defined as \( weight _{P}(L) = \sum _{w \in L} weight _{P}(w)\).

Intuitively, the state labelling \(\ell _{ sl }^1(q)\) computes the probability that q is reached from an initial state, so if q is pumped up with all possible word endings, this is the maximum possible error introduced by the added word endings. This has the following sources of imprecision: (1) the probability of some words may be included twice, e.g., when \(L^{\flat }_{A}(q) = \{a, ab\}\), the probabilities of all words from \(\{ab\}.\varSigma ^*\) are included twice in \(\ell _{ sl }^1(q)\) because \(\{ab\}.\varSigma ^* \subseteq \{a\}.\varSigma ^*\), and (2) \(\ell _{ sl }^1(q)\) can also contain probabilities of words that are already accepted on a run traversing q. The state labelling \(\ell _{ sl }^2\) deals with (1) by considering the probability of the language \(L^{\flat }_{A}(q). \varSigma ^*\), and \(\ell _{ sl }^3\) deals also with (2) by subtracting from the result of \(\ell _{ sl }^2\) the probabilities of the words that pass through q and are accepted.

The state labellings for the self-loop reduction are computed in a similar way as those for the pruning reduction (cf. Sect. 4.3). To compute \( weight _{P}(L)\), one can use the same algorithm as for \(\mathrm {Pr}_P(L)\), only with the final vector of the PA P set to \(\varvec{1}\). The correctness of Algorithms 1 and 2 when instantiated using the self-loop reduction is stated in the following lemma.

Lemma 8

For every \(x\in \{1,2,3\}\), the functions \( reduce _{ sl }\), \( error _{ sl }\), and \( label _{ sl }^x\) satisfy C1. Moreover, consider an NFA A, a PA P, and let \(\ell _{ sl }^x = label _{ sl }^x(A, P)\) for \(x \in \{1,2,3\}\). Then, for each \(q \in Q[A]\), we have \(\ell _{ sl }^1(q) \ge \ell _{ sl }^2(q) \ge \ell _{ sl }^3(q)\).

5 Reduction of NFAs in Network Intrusion Detection Systems

We have implemented our approach in a Python prototype named Appreal (APProximate REduction of Automata and Languages)Footnote 4 and evaluated it on the use case of network intrusion detection using Snort [1], a popular open source NIDS. The version of Appreal used for the evaluation in the current paper is available as an artifact [44] for the TACAS’18 artifact virtual machine [45].

5.1 Network Traffic Model

The reduction we describe in this paper is driven by a probabilistic model representing a distribution over \(\varSigma ^*\), and the formal guarantees are also wrt this model. We use learning to obtain a model of network traffic over the 8-bit ASCII alphabet at a given network point. Our model is created from several gigabytes of network traffic from a measuring point of the CESNET Internet provider connected to a 100 Gbps backbone link (unfortunately, we cannot provide the traffic dump since it may contain sensitive data).

Learning a PA representing the network traffic faithfully is hard. The PA cannot be too specific—although the number of different packets that can occur is finite, it is still extremely large (a conservative estimate assuming the most common scenario Ethernet/IPv4/TCP would still yield a number over \(2^{10,000}\)). If we assigned non-zero probabilities only to the packets from the dump (of which there are fewer than \(2^{20}\)), the obtained model would completely ignore virtually all packets that might appear on the network, and, moreover, the model would also be very large (millions of states), making it difficult to use in our algorithms. A generalization of the obtained traffic is therefore needed.

A natural solution is to exploit results from the area of PA learning, such as [39, 40]. Indeed, we experimented with the use of Alergia [39], a learning algorithm that constructs a PA from a prefix tree (where edges are labelled with multiplicities) by merging nodes that are “similar.” The automata that we obtained were, however, too general. In particular, the constructed automata destroyed the structure of network protocols—the merging was too permissive and the generalization merged distant states, which introduced loops over a very large substructure in the automaton (such a case usually does not correspond to the design of network protocols). As a result, the obtained PA more or less represented the Poisson distribution, having essentially no value for us.

In Sect. 5.2, we focus on the detection of malicious traffic transmitted over HTTP. We take advantage of this fact and create a PA representing the traffic while taking into account the structure of HTTP. We start by manually creating a DFA that represents the high-level structure of HTTP. Then, we proceed by feeding 34,191 HTTP packets from our sample into the DFA, at the same time taking notes about how many times every state is reached and how many times every transition is taken. The resulting PA \(P_{ HTTP }\) (of 52 states) is then obtained from the DFA and the labels in the obvious way.
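
One way to make this construction concrete is sketched below: the collected counters are normalised per state so that condition (ii) of the PA definition from Sect. 2 holds. The function and all names are ours, given only as an illustration.

```python
import numpy as np

def counts_to_pa(n_states, init_state, trans_counts, accept_counts, alphabet):
    """Build a PA (alpha, gamma, {Delta_a}) from counters collected on a DFA.

    trans_counts[(i, a, j)] ... how many times the transition i -a-> j was taken,
    accept_counts[i]        ... how many packets ended in state i.
    """
    alpha = np.zeros(n_states)
    alpha[init_state] = 1.0
    gamma = np.zeros(n_states)
    Delta = {a: np.zeros((n_states, n_states)) for a in alphabet}
    for i in range(n_states):
        out = {(a, j): c for (p, a, j), c in trans_counts.items() if p == i}
        total = accept_counts.get(i, 0) + sum(out.values())
        if total == 0:
            gamma[i] = 1.0    # state never visited in the sample; keep the PA well-formed
            continue
        gamma[i] = accept_counts.get(i, 0) / total
        for (a, j), c in out.items():
            Delta[a][i, j] = c / total
    return alpha, gamma, Delta
```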

The described method yields automata that are much better than those obtained using Alergia in our experiments. A disadvantage of the method is that it is only semi-automatic—the basic DFA needed to be provided by an expert. We have yet to find an algorithm that would suit our needs for learning more general network traffic.

5.2 Evaluation

We start this section by introducing the experimental setting, namely, the integration of our reduction techniques into the tool chain implementing efficient regex matching, the concrete settings of Appreal, and the evaluation environment. Afterwards, we discuss the results evaluating the quality of the obtained approximate reductions as well as of the provided error bounds. Finally, we present the performance of our approach and discuss its key aspects. Due to the lack of space, we selected the most interesting results demonstrating the potential as well as the limitations of our approach.

General Setting. Snort detects malicious network traffic based on rules that contain conditions. The conditions may take into consideration, among other things, network addresses, ports, or Perl compatible regular expressions (PCREs) that the packet payload should match. In our evaluation, we always select a subset of Snort rules, extract the PCREs from them, and use Netbench [20] to transform them into a single NFA A. Before applying \(\textsc {Appreal}{}\), we use the state-of-the-art NFA reduction tool Reduce [41] to decrease the size of A. Reduce performs a language-preserving reduction of A using advanced variants of simulation [31] (in the experiment reported in Table 3, we skip the use of Reduce at this step as discussed in the performance evaluation). The automaton \(A^{\textsc {Red}}\) obtained as the result of Reduce is the input of Appreal, which performs one of the approximate reductions from Sect. 4 wrt the traffic model \(P_{ HTTP }\), yielding \(A^{\textsc {App}}\). After the approximate reduction, we use Reduce once more and obtain the result \(A'\).

Settings of Appreal. In the use case of NIDS pre-filtering, it may be important to never introduce a false negative, i.e., to never drop a malicious packet. Therefore, we focus our evaluation on the self-loop reduction (Sect. 4.4). In particular, we use the state labelling function \( label _{ sl }^2\), since it provides a good trade-off between the precision and the computational demands (recall that the computation of \( label _{ sl }^2\) can exploit the “tentacle” structure of the NFAs we work with). We give more attention to the size-driven reduction (Sect. 4.1) since, in our setting, a bound on the available FPGA resources is typically given and the task is to create an NFA with the smallest error that fits inside. The order \(\mathrel {\preceq }_{A,\ell _{ sl }^2}\) over states used in Sects. 4.1 and 4.2 is defined as \(s \mathrel {\preceq }_{A,\ell _{ sl }^2}s' \Leftrightarrow \ell _{ sl }^2(s) \le \ell _{ sl }^2(s')\).

Evaluation Environment. All experiments were run on a 64-bit Linux Debian workstation with an Intel Core(TM) i5-661 CPU running at 3.33 GHz and 16 GiB of RAM.

Description of Tables. In the caption of every table, we provide the name of the input file (in the directory regexps/tacas18/ of the repository of Appreal) with the selection of Snort regexes used in the particular experiment, together with the type of the reduction (size- or error-driven). All reductions are over-approximating (self-loop reduction). We further provide the size of the input automaton |A|, the size after the initial processing by Reduce (\(|A^{\textsc {Red}}|\)), and the time of this reduction (\( time (\textsc {Reduce})\)). Finally, we list the times of computing the state labelling \( label _{ sl }^2\) on \(A^{\textsc {Red}}\) (\( time ( label _{ sl }^2)\)), the exact probabilistic distance (\( time (\text {Exact})\)), and also the number of look-up tables (\( LUTs (A^{\textsc {Red}})\)) consumed on the targeted FPGA (Xilinx Virtex 7 H580T) when \(A^{\textsc {Red}}\) was synthesized (more on this in Sect. 5.3). The meaning of the columns in the tables is the following:

  • \(\varvec{k}/\varvec{\epsilon }\) is the parameter of the reduction. In particular, \(\varvec{k}\) is used for the size-driven reduction and denotes the desired reduction ratio \(k = \frac{n}{|A^{\textsc {Red}}|}\) for an input NFA \(A^{\textsc {Red}}\) and the desired size of the output n. On the other hand, \(\varvec{\epsilon }\) is the desired maximum error on the output for the error-driven reduction.

  • \(|\varvec{A}^{\textsc {App}}|\) shows the number of states of the automaton \(A^{\textsc {App}}\) after the reduction by Appreal and the time the reduction took (we omit it when it is not interesting).

  • \(\varvec{|A'|}\) contains the number of states of the NFA \(A'\) obtained after applying Reduce on \(A^{\textsc {App}}\) and the time used by Reduce at this step (omitted when not interesting).

  • Error bound shows the estimation of the error of \(A'\) as determined by the reduction itself, i.e., it is the probabilistic distance computed by the function \( error \) in Sect. 4.

  • Exact error contains the values of \(d_{P_{ HTTP }}(A, A')\) that we computed after the reduction in order to evaluate the precision of the result given in Error bound. The computation of this value is very expensive (time(Exact)) since it inherently requires determinisation of the whole automaton A. We do not provide it in Table 3 (presenting the results for the automaton \(A_{\texttt {bd}}\) with 1,352 states) because the determinisation ran out of memory (the step is not required in the reduction process).

  • Traffic error shows the error that we obtained when comparing \(A'\) with A on an HTTP traffic sample, in particular the ratio of packets misclassified by \(A'\) to the total number of packets in the sample (242,468). Comparing Exact error with Traffic error gives us feedback about the fidelity of the traffic model \(P_{ HTTP }\). We note that there are no guarantees on the relationship between Exact error and Traffic error.

  • LUTs is the number of LUTs consumed by \(A'\) when synthesized into the FPGA. Hardware synthesis is a costly step so we provide this value only for selected NFAs.

Table 1. Results for the http-malicious regex, \(|A_{\texttt {mal}}| = 249\), \(|A_{\texttt {mal}}^{\textsc {Red}}| = 98\), \( time (\textsc {Reduce}{}) = 3.5\) s, \( time ( label _{ sl }^2) = 38.7\) s, \( time (\text {Exact}) = {}\) 3.8–6.5 s, and \( LUTs (A_{\texttt {mal}}^{\textsc {Red}}) = 382\).

Approximation Errors

Table 1 presents the results of the self-loop reduction for the NFA \(A_{\texttt {mal}}\) describing http-malicious regexes. We can observe that the differences between the upper bounds on the probabilistic distance and its real value are negligible (typically in the order of \(10^{-4}\) or less). We can also see that the probabilistic distance agrees with the traffic error. This indicates a good quality of the traffic model employed in the reduction process. Further, we can see that our approach can provide useful trade-offs between the reduction error and the reduction factor. Finally, Table 1 shows that a significant reduction is obtained when the error threshold \(\epsilon \) is increased from 0.04 to 0.07.

Table 2. Results for the http-attacks regex, size-driven reduction, \(|A_{\texttt {att}}| = 142\), \(|A_{\texttt {att}}^{\textsc {Red}}| = 112\), \( time (\textsc {Reduce}{}) = 7.9\) s, \( time ( label _{ sl }^2) = 28.3\) min, \( time (\text {Exact}) = {}\) 14.0–16.4 min.

Table 2 presents the results of the size-driven self-loop reduction for NFA \(A_{\texttt {att}}\) describing http-attacks regexes. We can observe that the error bounds again provide a very good approximation of the real probabilistic distance. On the other hand, the difference between the probabilistic distance and the traffic error is larger than for \(A_{\texttt {mal}}\). Since all experiments use the same probabilistic automaton and the same traffic, this discrepancy can be attributed to the different set of packets that are incorrectly accepted by \(A_{\texttt {att}}^{\textsc {Red}}\). If the probability of these packets is adequately captured in the traffic model, the difference between the distance and the traffic error is small and vice versa. This also explains an even larger difference in Table 3 (presenting the results for \(A_{\texttt {bd}}\) constructed from http-backdoor regexes) for \(k \in \langle 0.2, 0.4\rangle \). Here, the traffic error is very small and caused by a small set of packets (approx. 70), whose probability is not correctly captured in the traffic model. Despite this problem, the results clearly show that our approach still provides significant reductions while keeping the traffic error small: about a 5-fold reduction is obtained for the traffic error 0.03 % and a 10-fold reduction is obtained for the traffic error 6.3 %. We discuss the practical impact of such a reduction in Sect. 5.3.

Performance of the Approximate Reduction

Table 3. Results for http-backdoor, size-driven reduction, \(|A_{\texttt {bd}}| = 1,352\), \( time ( label _{ sl }^2) = 19.9\) min, \( LUTs (A_{\texttt {bd}}^{\textsc {Red}}) = 2,266\).

In all our experiments (Tables 1, 2 and 3), we can observe that the most time-consuming step of the reduction process is the computation of state labellings (it takes at least 90 % of the total time). The crucial observation is that the structure of the NFAs fundamentally affects the performance of this step. Although after Reduce, the size of \(A_{\texttt {mal}}\) is very similar to the size of \(A_{\texttt {att}}\), computing \( label _{ sl }^2\) takes considerably longer for \(A_{\texttt {att}}\) (28.3 min vs. 38.7 s). The key reason behind this slowdown is the determinisation (or alternatively disambiguation) process required by the product construction underlying the state labelling computation (cf. Sect. 4.4). For \(A_{\texttt {att}}\), the process results in a significantly larger product when compared to the product for \(A_{\texttt {mal}}\). The size of the product directly determines the time and space complexity of solving the linear equation system required for computing the state labelling.

As explained in Sect. 4, the computation of the state labelling \( label _{ sl }^2\) can exploit the “tentacle” structure of the NFAs appearing in NIDSes and thus can be done component-wise. On the other hand, our experiments reveal that the use of Reduce typically breaks this structure and thus the component-wise computation cannot be effectively used. For the NFA \(A_{\texttt {mal}}\), this behaviour does not have any major performance impact as the determinisation leads to a moderate-sized automaton and the state labelling computation takes less than 40 s. On the other hand, this behaviour has a dramatic effect for the NFA \(A_{\texttt {att}}\). By disabling the initial application of Reduce and thus preserving the original structure of \(A_{\texttt {att}}\), we were able to speed up the state label computation from 28.3 min to 1.5 min. Note that other steps of the approximate reduction took a similar time as before disabling Reduce and also that the trade-offs between the error and the reduction factor were similar. Surprisingly, disabling Reduce caused the computation of the exact probabilistic distance to become computationally infeasible because the determinisation ran out of memory.

Due to the size of the NFA \(A_{\texttt {bd}}\), the impact of disabling the initial application of Reduce is even more fundamental. In particular, computing the state labelling took only 19.9 min, in contrast to running out of memory when the Reduce is applied in the first step (therefore, the input automaton is not processed by Reduce in Table 3; we still give the number of LUTs of its reduced version for comparison, though). Note that the size of \(A_{\texttt {bd}}\) also slows down other reduction steps (the greedy algorithm and the final Reduce reduction). We can, however, clearly see that computing the state labelling is still the most time-consuming step.

5.3 The Real Impact in an FPGA-Accelerated NIDS

Further, we also evaluated some of the obtained automata in the setting of [5] implementing a high-speed NIDS pre-filter. In that setting, the amount of resources available for the regex matching engine is 15,000 LUTsFootnote 5 and the frequency of the engine is 200 MHz. We synthesized NFAs that use a 32-bit-wide data path, corresponding to processing 4 ASCII characters at once, which is—according to the analysis in [5]—the best trade-off between the utilization of the chip resources and the maximum achievable frequency. A simple analysis shows that the throughput of one automaton is 6.4 Gbps, so in order to reach the desired link speed of 100 Gbps, 16 units are required, and 63 units are needed to handle 400 Gbps. With the given amount of LUTs, we are therefore bounded by 937 LUTs for 100 Gbps and 238 LUTs for 400 Gbps.
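
For reference, the arithmetic behind these bounds is:

$$\begin{aligned} 32\,\text {bits} \times 200\,\text {MHz}&= 6.4\,\text {Gbps per unit},\\ \lceil 100/6.4 \rceil = 16\ \text {units}&\Rightarrow \lfloor 15{,}000/16 \rfloor = 937\ \text {LUTs per unit},\\ \lceil 400/6.4 \rceil = 63\ \text {units}&\Rightarrow \lfloor 15{,}000/63 \rfloor = 238\ \text {LUTs per unit}. \end{aligned}$$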

We focused on the consumption of LUTs by an implementation of the regex matching engines for http-backdoor (\(A_{\texttt {bd}}^{\textsc {Red}}\)) and http-malicious (\(A_{\texttt {mal}}^{\textsc {Red}}\)).

  • 100 Gbps: For this speed, \(A_{\texttt {mal}}^{\textsc {Red}}\) can be used without any approximate reduction as it is small enough to fit in the available space. On the other hand, \(A_{\texttt {bd}}^{\textsc {Red}}\) without the approximate reduction is way too large to fit (at most 6 units fit inside the available space, yielding the throughput of only 38.4 Gbps, which is unacceptable). The column LUTs in Table 3 shows that using our framework, we are able to reduce \(A_{\texttt {bd}}^{\textsc {Red}}\) such that it uses 894 LUTs (for \(\varvec{k}\) = 0.3), and so all the needed 16 units fit into the FPGA, yielding the throughput over 100 Gbps and the theoretical error bound of a false positive \(\le 3.4\times 10^{-8}\) wrt the model \(P_{ HTTP }\).

  • 400 Gbps: Regex matching at this speed is extremely challenging. The only reduced version of \(A_{\texttt {bd}}^{\textsc {Red}}\) that fits in the available space is the one for the value \(\varvec{k} = 0.1\) with the error bound almost 1. The situation is better for \(A_{\texttt {mal}}^{\textsc {Red}}\). In the exact version, at most 39 units can fit inside the FPGA with the maximum throughput of 249.6 Gbps. On the other hand, when using our approximate reduction framework, we are able to place 63 units into the FPGA, each of the size 224 LUTs (\(\varvec{k}\) = 0.6) with the throughput over 400 Gbps and the theoretical error bound of a false positive \(\le 8.7\times 10^{-8}\) wrt the model \(P_{ HTTP }\).

6 Conclusion

We have proposed a novel approach for approximate reduction of NFAs used in network traffic filtering. Our approach is based on a new probabilistic distance between the original and the reduced automaton, defined using a probabilistic model of the input network traffic, which characterizes the significance of particular packets. We characterized the computational complexity of approximate reductions based on the described distance and proposed a sequence of heuristics allowing one to perform the approximate reduction in an efficient way. Our experimental results are quite encouraging and show that we can often achieve a very significant reduction for a negligible loss of precision. We showed that using our approach, FPGA-accelerated network filtering at high traffic speeds can be applied to regexes of malicious traffic where it could not be applied before.

In the future, we plan to investigate other approximate reductions of the NFAs, maybe using some variant of abstraction from abstract regular model checking [42], adapted for the given probabilistic setting. Another important issue for the future is to develop better ways of learning a suitable probabilistic model of the input traffic.

Data Availability Statement and Acknowledgements. The tool used for the experimental evaluation in the current study is available in the following figshare repository: https://doi.org/10.6084/m9.figshare.5907055.v1. We thank Jan Kořenek, Vlastimil Košař, and Denis Matoušek for their help with translating regexes into automata and synthesis of FPGA designs, and Martin Žádník for providing us with the backbone network traffic. We thank Stefan Kiefer for helping us proving the PSPACE part of Lemma 1 and Petr Peringer for testing our artifact. The work on this paper was supported by the Czech Science Foundation project 16-17538S, the IT4IXS: IT4Innovations Excellence in Science project (LQ1602), and the FIT BUT internal project FIT-S-17-4014.