Introduction

Text compression (Bell et al., 1990) permits representing a document using less space. This is useful not only to save disk space, but more importantly, to save disk transfer and network transmission time. In recent years, compression techniques especially designed for natural language texts have not only proven extremely effective (with compression ratios around 25%–35%), but also permitted searching the compressed text much faster (up to 8 times) than the original text (Turpin and Moffat, 1997; Moura et al., 1998, 2000). The integration of compression and indexing techniques (Witten et al., 1999; Navarro et al., 2000; Ziviani et al., 2000) opened the door to compressed text databases, where texts and indexes are manipulated directly in compressed form, thus saving both time and space.

Not every compression method is suitable for a compressed text database. The compressed text should satisfy three basic conditions: (1) it can be directly accessed at random positions, (2) it can be decompressed fast, and (3) it can be searched fast. The rationale of conditions (1) and (2) is clear: one wishes to display individual documents to final users without decompressing the whole text collection that precedes them. Moreover, in many cases it is necessary to display only a snippet of the text around an occurrence position, so it must be possible to start decompression from any position of the compressed text, not only from the beginning of a document or even from the beginning of a codeword. Condition (3) could be unnecessary if an inverted index pointing to all text words were available, yet such indexes take up significant extra space (Baeza-Yates and Ribeiro-Neto, 1999). Alternatively, one can use inverted indexes pointing to whole documents, which are still able to solve one-word queries without accessing the text. Yet, more complex queries, such as phrase and proximity queries, require some direct text processing in order to filter out documents containing all the query terms in the wrong order. Moreover, some space-time tradeoffs in inverted indexes are based on grouping documents into blocks, and therefore sequential scanning is necessary even on single-word queries (Manber and Wu, 1994; Navarro et al., 2000). Although partial decompression followed by searching is a solution, direct search of the compressed text is much more efficient (Ziviani et al., 2000).

Classic compression techniques are generally unattractive for compressing text databases. For example, the well-known algorithms of Ziv and Lempel (1977, 1978) permit searching the text directly, without decompressing it, in half the time necessary for decompression (Navarro and Tarhio, 2000, 2005). Yet, like any other adaptive compression technique, they do not permit direct random access to the compressed text, thus failing on Condition (1). Semistatic techniques are necessary to ensure that the decoder can start working from any point of the compressed text without having seen all the previous text. Semistatic techniques also permit fast direct search of the compressed text, by (essentially) compressing the pattern and searching for it in the compressed text. This does not work on adaptive methods, as the pattern does not appear in the same form across the compressed text.

Classic semistatic compression methods, however, are not entirely satisfactory either. For example, the Huffman (1952) code offers direct random access from codeword beginnings and decent decompression and direct search speeds (Miyazaki et al., 1998), yet the compression ratio of the Huffman code on natural language is poor (around 65%).

The key to the success of natural language compressed text databases is the use of the semistatic word-based model of Moffat (1989), so that the text is regarded as a sequence of words (and separators). A word-based Huffman code (Turpin and Moffat, 1997), where codewords are sequences of bits, achieves a compression ratio of about 25%, although decompression and search are not so fast because of the need for bit-wise manipulations. A byte-oriented word-based Huffman code, called Plain Huffman Code (PHC) by Moura et al. (2000), eliminates this problem by using 256-ary Huffman trees, so that codewords are sequences of bytes. As a result, decompression and search are faster, although the compression ratio rises to about 30%. As this ratio is still attractive, they also propose Tagged Huffman Code (THC), whose compression ratio is around 35% but which permits much faster Boyer-Moore-type search directly in the compressed text, as well as decompression from any point of the compressed file (even if not codeword-aligned).

In this paper, we improve the existing tradeoffs on word-based semistatic compression, presenting two new compression techniques that allow direct searching and direct access to the compressed text. Roughly, we achieve the same search performance and capabilities of Tagged Huffman Code, combined with compression ratios similar to those of Plain Huffman Code. Our techniques have the additional attractiveness of being very simple to program.

We first introduce End-Tagged Dense Code (ETDC), a compression technique that allows (i) efficient decompression of arbitrary portions of the text (direct access), and (ii) efficient Boyer-Moore-type search directly on the compressed text, just as in Tagged Huffman Code. End-Tagged Dense Code improves both Huffman codes in encoding/decoding speed. It also improves Tagged Huffman Code in compression ratio, while retaining similar search time and capabilities.

We then present (s,c)-Dense Code (SCDC), a generalization of End-Tagged Dense Code which achieves better compression ratios while retaining all the search capabilities of End-Tagged Dense Code. (s,c)-Dense Code poses only a negligible overhead over the optimal compression ratio reached by Plain Huffman Code.

Partial early versions of this paper were presented in Brisaboa et al. (2003a, b) and Fariña (2005).

The outline of this paper is as follows. Section 2 starts with some related work. Section 3 presents our first technique, End-Tagged Dense Code. Next, Section 4 introduces the second technique, (s, c)-Dense Code. Encoding, decoding, and search algorithms for both compression techniques are presented in Section 5. Section 6 is devoted to empirical results. Finally, Section 7 gives our conclusions and future work directions.

Related work

Text compression (Bell et al., 1990) consists of representing a sequence of characters using fewer bits than its original representation. The text is seen as a sequence of source symbols (characters, words, etc.). For the reasons we have explained, we are interested in semistatic methods, where each source symbol is assigned a codeword (that is, a sequence of target symbols), and this assignment does not change across the compression process. The compressed text is then the sequence of codewords assigned to its source symbols. The function that assigns a codeword to each source symbol is called a code. Among the possible codes, prefix codes are preferable in most cases. A prefix code guarantees that no codeword is a prefix of another, thus permitting decoding a codeword right after it is read (hence the alternative name instantaneous code).

The Huffman (1952) code is the optimal (shortest total length) prefix code for any frequency distribution. It has been traditionally applied to text compression by considering characters as the source symbols and bits as the target symbols. On natural language texts, this yields poor compression ratios (around 65%). The key idea to the success of semistatic compression on natural language text databases was to consider words as the source symbols (Moffat, 1989) (as well as separators, defined as maximal text substrings between consecutive words). The distribution of words in natural language is much more skewed than that of characters, following a Zipf Law (that is, the frequency of the i-th most frequent word is proportional to \(1/i^\theta\), for some 1 < θ < 2 (Zipf, 1949; Baeza-Yates and Ribeiro-Neto, 1999)), and the distribution of separators is even more skewed. As a result, compression ratios reach around 25%, which is close to what can be obtained with any other compression method (Bell et al., 1990). The price of having a larger set of source symbols (which semistatic methods must encode together with the compressed text) is not significant on large text collections, as the vocabulary grows slowly (\(O(N^\beta)\) symbols on a text of N words, for some β ≈ 0.5, by Heaps Law (Heaps, 1978; Baeza-Yates and Ribeiro-Neto, 1999)).

This solution is acceptable for compressed text databases. With respect to searching those Huffman codes, essentially one can compress the pattern and search the text for it (Turpin and Moffat, 1997; Miyazaki et al., 1998). However, it is necessary to process the text bits sequentially in order to avoid false matches. These occur because the compressed pattern might appear in the compressed text without being aligned to codeword boundaries: the concatenation of two codewords might contain the pattern bit string even though the pattern is not in the text. A sequential processing ensures that the search is aware of the codeword beginnings and thus false matches are avoided.

With such a large source vocabulary, it makes sense to have a larger target alphabet. The use of bytes as target symbols was explored by Moura et al. (2000), who proposed two byte-oriented word-based Huffman codes as a way to speed up the processing of the compressed text.

The first, Plain Huffman Code (PHC), is no more than a Huffman code where the source symbols are the text words and separators, and the target symbols are bytes. This obtains compression ratios close to 30% on natural language, an overhead of about 5% with respect to the word-based approach of Moffat (1989), where the target symbols are bits. In exchange, decompression, and in general traversal of the compressed text, is around 30% faster with Plain Huffman Code, as no bit manipulations are necessary (Moura et al., 2000). This is highly valuable in a compressed text database scenario.

The second code, Tagged Huffman Code (THC), is similar except that it uses the highest bit of each byte to signal the first byte of each codeword. Hence, only 7 bits of each byte are used for the Huffman code. Note that the use of a Huffman code over the remaining 7 bits is mandatory, as the flag alone does not make the code a prefix code. Compared to Plain Huffman Code, Tagged Huffman Code produces a compressed text around 11% longer, reaching a compression ratio of about 35%.

Two important advantages justify this choice in a compressed text database scenario. First, Tagged Huffman Code can be accessed at any position for decompression, even in the middle of a codeword. The flag bit permits easy synchronization to the next or previous codeword. Plain Huffman Code, on the other hand, can start decompression only from codeword beginnings. Second, a text compressed with Tagged Huffman Code can be searched efficiently, by just compressing the pattern word or phrase and then running any classical string matching algorithm for the compressed pattern on the compressed text. In particular, one can use those algorithms able to skip characters (Boyer and Moore, 1977; Navarro and Raffinot, 2002). This is not possible with Plain Huffman Code, because of the false matches problem. On Tagged Huffman Code false matches are impossible thanks to the flag bits.

It is interesting to point out some approaches that attempt to deal with the false matches problem without scanning every target symbol. The idea is to find a synchronization point, that is, a position in the compressed text where it is certain that a codeword starts. Recently, Klein and Shapira (2005) proposed that, once a match of the search pattern is found at position i, a decoding algorithm starts at position i − K, where K is a constant. It is likely that the algorithm synchronizes itself with the beginning of a codeword before it reaches position i again. However, false matches may still appear, and the paper analyzes the probability of reporting them as true matches.

Another alternative, proposed by Moura et al. (2000), is to align the codeword beginnings to block boundaries of B bytes. That is, no codeword is permitted to cross a B-byte boundary, and thus one can start decompression at any point by going back to the last position that is a multiple of B. This way, one can search using any string matching algorithm, and then rule out false matches by retraversing the blocks where matches have been found, in order to ensure that those matches are codeword-aligned. They report best results with B = 256, where they pay a space overhead of 0.78% over Plain Huffman Code and a search time overhead of 7% over Tagged Huffman Code.

Moura et al. (2000) finally show how more complex searches can be carried out. For example, complex patterns that match a single word are first searched for in the vocabulary, and then a multipattern search for all the codewords of the matching vocabulary words is carried out on the text. Sequences of complex patterns can match phrases following the same idea. It is also possible to perform more complex searches, such as approximate matching at the word level (that is, search for a phrase permitting at most k insertions, deletions, replacements, or transpositions of words). Overall, the compressed text not only takes less space than the original text, but it is also searched 2 to 8 times faster.

The combination of this compressed text with compressed indexes (Witten et al., 1999; Navarro et al., 2000) opens the door to compressed text databases where the text is always in compressed form, being decompressed only for presentation purposes (Ziviani et al., 2000).

Huffman coding is a statistical method, in the sense that the codeword assignment is done according to the frequencies of the source symbols. There are also some so-called substitution methods suitable for compressed text databases. The earliest use of a substitution method for direct searching we know of was proposed by Manber (1997), yet its compression ratios were poor (around 70%). This encoding was a simplified variant of Byte-Pair Encoding (BPE) (Gage, 1994). BPE is a multi-pass method based on finding frequent pairs of consecutive source symbols and replacing them with a fresh source symbol. On natural language text, it obtains a poor compression ratio (around 50%), but its word-based version is much better, achieving compression ratios around 25%–30% (Wan, 2003). It has been shown how to search the character-based version of BPE with competitive performance (Shibata et al., 2000; Takeda et al., 2001), and it is likely that the word-based version can be searched as well. Yet, the major emphasis in the word-based version has been the possibility of browsing over the frequent phrases of the text collection (Wan, 2003).

Other methods with competitive compression ratios on natural language text, yet unable to search the compressed text faster than the uncompressed text, include Ziv-Lempel compression (Ziv and Lempel, 1977, 1978) (implemented for example in Gnu gzip), Burrows-Wheeler compression (Burrows and Wheeler, 1994) (implemented for example in Seward's bzip2), and statistical modeling with arithmetic coding (Carpinelli et al., 1999).

End-Tagged Dense Code

We obtain End-Tagged Dense Code (ETDC) by a simple change to Tagged Huffman Code (Moura et al., 2000). Instead of using the highest bit to signal the beginning of a codeword, it is used to signal the end of a codeword. That is, the highest bit of codeword bytes is 1 for the last byte (not the first) and 0 for the others.

This change has surprising consequences. Now the flag bit is enough to ensure that the code is a prefix code, regardless of the content of the other 7 bits of each byte. To see this, consider two codewords X and Y, where X is shorter than Y (|X| < |Y|). X cannot be a prefix of Y because the last byte of X has its flag bit set to 1, whereas the |X|-th byte of Y has its flag bit set to 0. Thanks to this change, there is no need at all to use Huffman coding in order to ensure a prefix code. Rather, all possible combinations of the remaining 7 bits of each byte can be used, producing a dense encoding. This is the key to improving the compression ratio achieved by Tagged Huffman Code, which has to avoid some combinations of these 7 bits in each byte because they are prefixes of other codewords (remember that the tag bit of THC is not enough to produce a prefix code, and hence a Huffman coding over the remaining 7 bits is mandatory). Thus, ETDC yields a better compression ratio than Tagged Huffman Code while keeping all its good searching and decompression capabilities. In addition, ETDC is simpler to build and faster in both compression and decompression.

Example 1

Assume we have a text with a vocabulary of ten words and that we compress it with target symbols of three bits. Observe in Table 1 the fourth most frequent symbol. Using THC, the target symbol 111 cannot be used as a codeword by itself since it is a prefix of other codewords. However, ETDC can use symbol 111 as a codeword, since it cannot be a prefix of any other codeword due to the flag bit. The same happens with the seventh most frequent word in THC: The target symbols 111 011 cannot be used as a codeword, as again they are reserved as a prefix of other codewords.

Table 1 Comparative example among ETDC and THC, for b = 3

In general, ETDC can be defined over target symbols of b bits, although in this paper we focus on the byte-oriented version where b = 8. ETDC is formally defined as follows.

Definition 1

The b-ary End-Tagged Dense Code assigns to the i-th most frequent source symbol (starting with i = 0) a codeword of k digits in base \(2^b\), where

$$ 2^{b-1}~\frac{2^{(b-1)(k-1)} - 1}{2^{b-1}-1} \leq i < 2^{b-1}~\frac{2^{(b-1)k} - 1} {2^{b-1}-1}. $$

Those k digits are filled with the representation of number \(i - 2^{b-1}~\frac{2^{(b-1)(k-1)} - 1}{2^{b-1}-1} \) in base \(2^{b-1}\) (most to least significant digit), and we add \(2^{b-1}\) to the least significant digit (that is, the last digit).

That is, for b = 8, the first word (i = 0) is encoded as 10000000, the second as 10000001, and so on until the 128th, encoded as 11111111. The 129th word is encoded as 00000000:10000000, the 130th as 00000000:10000001, and so on until the (\(128^2 + 128\))th word, encoded as 01111111:11111111.
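
As an informal check of these values, the following sketch (ours, not code from the paper) assigns ETDC codewords by plain arithmetic and prints them in binary; the function name and the digit-list output format are just illustrative choices.

```python
def etdc_codeword(i, b=8):
    """Return the ETDC codeword of the i-th most frequent word (i starts at 0)."""
    half = 1 << (b - 1)                 # 2^(b-1), i.e., 128 when b = 8
    digits = [half + (i % half)]        # last digit: flag bit set
    i //= half
    while i > 0:                        # remaining digits: flag bit clear
        i -= 1
        digits.append(i % half)
        i //= half
    return digits[::-1]                 # most significant digit first

for rank in (0, 1, 127, 128, 16511):    # 16511 = 128^2 + 128 - 1
    print(rank, ":".join(format(d, "08b") for d in etdc_codeword(rank)))
# 0     -> 10000000
# 1     -> 10000001
# 127   -> 11111111
# 128   -> 00000000:10000000
# 16511 -> 01111111:11111111
```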

The number of words encoded with 1, 2, 3, etc., bytes is fixed (specifically 128, \(128^2\), \(128^3\), and so on). Definition 1 gives the formula for the change points in codeword lengths (ranks \(i = 2^{b-1}~\frac{2^{(b-1)(k-1)} - 1}{2^{b-1}-1} \)).

Note that the code depends on the rank of the words, not on their actual frequency. That is, if we have four words A, B, C, D (ranked 1 to 4) with frequencies 0.36, 0.22, 0.22, and 0.20, respectively, then the code will be the same as if their frequencies were 0.9, 0.09, 0.009, and 0.001. As a result, only the sorted vocabulary must be stored with the compressed text for the decompressor to rebuild the model. Therefore, the vocabulary will be basically of the same size as in the case of Huffman codes, yet Huffman codes need some extra information about the shape of the Huffman tree (which is nevertheless negligible using canonical Huffman trees).

As can be seen in Table 2, the computation of the code is extremely simple: it is only necessary to sort the source symbols by decreasing frequency and then sequentially assign the codewords, taking care of the flag bit. This makes the coding phase faster than with Huffman coding, as obtaining the codewords is simpler.

Table 2 Code assignment in end-tagged dense code

On the other hand, it is also easy to assign the codeword of an isolated rank i. Following Definition 1, it is easy to see that we can encode a rank and decode a codeword in O((log i)/b) time. Section 5 presents those algorithms.

Actually, the idea of ETDC is not new if we see it under a different light. What we are doing is to encode the symbol frequency rank with a variable-length integer representation. The well-known universal codes \(C_\alpha\), \(C_\gamma\), and \(C_\omega\) (Elias, 1975) also assign codewords to source symbols in order of decreasing probability, with shorter codewords for the first positions. Other authors proposed other codes with similar characteristics (Lakshmanan, 1981; Fraenkel and Klein, 1996). These codes yield an average codeword length within a constant factor of the optimal average length. Unfortunately, the constant may be too large for the code to be preferable over one based on the probabilities, such as Huffman, which is optimal but needs to know the distribution in advance. The reason is that these codes adapt well only to some probability distributions, which may be far away from those of our interest. More specifically, \(C_\alpha\) is suitable when the distribution is very skewed (more than our vocabularies), while \(C_\gamma\) and \(C_\omega\) behave better when no word is much more likely than any other. These codes do not adjust well to large and moderately skewed vocabularies, such as those of text databases. Moreover, we show in Section 4 how ETDC can be adapted better to specific vocabulary distributions.

It is possible to bound the compression performance of ETDC in terms of the text entropy or in terms of Huffman performance. Let \(E_b\) be the average codeword length, measured in target symbols, using a b-ary ETDC (that is, using target symbols of b bits), and \(H_b\) the same using a b-ary Huffman code. As ETDC is a prefix code and Huffman is the optimal prefix code, we have \(H_b \le E_b \). On the other hand, as ETDC uses all the combinations of the remaining b−1 bits (leaving the other bit for the flag), its average codeword length cannot exceed \(H_{b-1}\), where sequences that are prefixes of others are forbidden. Thus \(H_b \le E_b \le H_{b-1} \). The average length of a Huffman codeword exceeds the zero-order entropy of the text by less than one target symbol (Bell et al., 1990). Let \(\cal H \) be the zero-order entropy measured in bits. Thus, \({\cal H} \le b H_b < {\cal H}+b \), and the same holds for \(H_{b-1} \). We conclude that

$$ \frac{\cal H}{b} \le E_b < \frac{{\cal H}}{b-1}+1. $$

While the first inequality is obvious, the second tells us that the average number of bits used by a b-ary ETDC is at most \(\frac{b}{b-1}~{\cal H} + b \). It also means that \(E_b < \frac{b}{b-1}~H_b + 1 \), which upper bounds the coding inefficiency of ETDC with respect to a b-ary Huffman. Several studies about bounds on Dense Codes and b-ary Huffman codes applied to Zipf (1949) and Zipf-Mandelbrot (Mandelbrot, 1953) distributions can be found in Navarro and Brisaboa (2006) and Fariña (2005).

As shown in Section 6, ETDC improves the compression ratio of Tagged Huffman Code by more than 8%. Its difference with respect to Plain Huffman Code is just around 2.5%, much less than the rough upper bound just obtained. On the other hand, the encoding time with ETDC is only about 40% of that of Plain Huffman Code, and one can search ETDC as fast as Tagged Huffman Code.

(s, c)-Dense Code

Instead of thinking in terms of tag bits, End-Tagged Dense Code can be seen as using \(2^{b-1} \) values, from 0 to \(2^{b-1}-1 \), for the symbols that do not end a codeword, and using the other \(2^{b-1} \) values, from \(2^{b-1} \) to \(2^b-1 \), for the last symbol of the codewords. Let us call continuers the former values and stoppers the latter. The question that arises now is whether that proportion between the number c of continuers and the number s of stoppers is optimal. That is, for a given text collection with a specific word frequency distribution, we want to use the optimal number of continuers and stoppers, which will probably differ from \(s = c = 2^{b-1}\). Thus (s, c)-Dense Code is a generalization of ETDC, where any \(s+c = 2^b\) can be used (in particular, the values maximizing compression). ETDC is actually a \((2^{b-1},2^{b-1}) \)-Dense Code.

This idea has been previously pointed out by Rautio et al. (2002). They presented an encoding scheme using stoppers and continuers on a character-based source alphabet, yet their goal is to have a code where searches can be efficiently performed. Their idea is to create a code where each codeword can be split into two parts in such a way that searches can be performed using only one part of the codewords.

Example 2 illustrates the advantages of using a variable rather than a fixed number of stoppers and continuers.

Example 2

Assume that 5,000 distinct words compose the vocabulary of the text to compress, and that b = 8 (byte-oriented code).

If End-Tagged Dense Code is used, that is, if the number of stoppers and continuers is \(2^7 = 128\), there will be 128 codewords of one byte, and the rest of the words would have codewords of two bytes, since 128 + \(128^2\) = 16,512. That is, 16,512 is the number of words that can be encoded with codewords of one or two bytes. Therefore, there would be 16,512 − 5,000 = 11,512 unused codewords of two bytes.

If the number of stoppers chosen is 230 (so the number of continuers is 256 − 230 = 26), then 230 + 230 × 26 = 6,210 words can be encoded with codewords of only one or two bytes. Therefore all the 5,000 words can be assigned codewords of one or two bytes in the following way: the 230 most frequent words are assigned one-byte codewords and the remaining 5,000−230 = 4,770 words are assigned two-byte codewords.

It can be seen that words ranked 1 to 128 and words ranked 231 to 5,000 are assigned codewords of the same length in both schemes. However, words ranked 129 to 230 are assigned shorter codewords when 230 stoppers are used instead of only 128.

This shows that it can be advantageous to adapt the number of stoppers and continuers to the size and the word frequency distribution of the vocabulary.
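
A two-line computation (ours) makes the arithmetic of Example 2 explicit: with b = 8, the number of words that fit in codewords of at most two bytes is s + sc.

```python
def capacity_two_bytes(s, b=8):
    """Words encodable with one- or two-byte codewords in an (s, c)-Dense Code, c = 2^b - s."""
    c = (1 << b) - s
    return s + s * c

print(capacity_two_bytes(128))   # 16512 = 128 + 128*128   (ETDC)
print(capacity_two_bytes(230))   # 6210  = 230 + 230*26    (still enough for 5,000 words)
```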

Formalization

In this section, we formally define (s, c)-Dense Code and prove some of its properties. Formally, this section contains the material of Section 3, yet we have chosen to present ETDC first because it is more intuitively derived from the previous Tagged Huffman Code. We start by defining (s, c) stop-cont codes as follows.

Definition 2

Given positive integers s and c, a (s, c) stop-cont code assigns to each source symbol a unique target code formed by a sequence of zero or more digits in base c (that is, from 0 to c−1), terminated with a digit between c and c+s−1.

It should be clear that a stop-cont coding is just a base-c numerical representation, with the exception that the last digit is between c and c+s−1. Continuers are digits between 0 and c−1 and stoppers are digits between c and c+s−1. The next property clearly follows.

Property 1.

Any (s, c) stop-cont code is a prefix code.

Proof:

If one codeword were a prefix of another, then, since the shorter codeword ends with a digit of value not smaller than c, the longer codeword would have an intermediate digit of value not smaller than c, that is, a digit that is not in base c. This is a contradiction.

Among all possible (s, c) stop-cont codes for a given probability distribution, (s, c)-Dense Code minimizes the average codeword length.

Definition 3

Given positive integers s and c, (s, c)-Dense Code ((s, c)-DC, or SCDC) is a (s, c) stop-cont code that assigns the i-th most frequent source symbol (starting with i = 0) to a codeword of k digits in base s+c (most significant digits first), where

$$ s~\frac{ c^{k-1} -1 }{c-1} \leq i < s\frac{ c^{k} -1}{c-1} $$

Let \(x = i - s~\frac{ c^{k-1} -1 }{c-1} \). Then, the first k−1 digits are filled with the representation of number \(\lfloor x/s \rfloor \) in base c, and the last digit is \(c + (x \ mod \ s) \).

To fix ideas, using bytes as symbols (\(s+c = 2^8\)), the encoding process can be described as follows:

  • One-byte codewords from c to c+s−1 are given to the first s words in the vocabulary.

  • Words ranked from s to s + sc − 1 are assigned sequentially to two-byte codewords. The first byte of each codeword has a value in the range \([0,c-1]\) and the second in the range \([c,c+s-1]\).

  • Words from \(s+sc \) to \(s+sc+sc^2-1 \) are assigned to three-byte codewords, and so on.

Table 3 summarizes this process. Next, we give an example of how codewords are assigned.

Table 3 Code assignment in (s, c)-Dense Code.

Example 3

The codewords assigned to twenty-two source symbols by a (2,6)-Dense Code are the following (from most to least frequent symbol): 〈6〉, 〈7〉, 〈0,6〉, 〈0,7〉, 〈1,6〉, 〈1,7〉, 〈2,6〉, 〈2,7〉, 〈3,6〉, 〈3,7〉, 〈4,6〉, 〈4,7〉, 〈5,6〉, 〈5,7〉, 〈0,0,6〉, 〈0,0,7〉, 〈0,1,6〉, 〈0,1,7〉, 〈0,2,6〉, 〈0,2,7〉, 〈0,3,6〉, 〈0,3,7〉.
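
The following sketch (ours; names are illustrative) follows Definition 3 literally and reproduces the (2,6)-Dense Code codewords listed above.

```python
def scdc_codeword(i, s, c):
    """(s,c)-Dense Code codeword of the i-th most frequent symbol (i starts at 0),
    returned as a list of digits in base s+c, most significant first (Definition 3)."""
    k, first = 1, 0                     # first = rank of the first k-digit codeword
    while i >= first + s * c ** (k - 1):
        first += s * c ** (k - 1)
        k += 1
    x = i - first                       # offset within the k-digit block
    prefix, y = [], x // s
    for _ in range(k - 1):              # representation of floor(x/s) in base c
        prefix.append(y % c)
        y //= c
    return prefix[::-1] + [c + (x % s)] # last digit is a stopper

print([scdc_codeword(i, 2, 6) for i in range(22)])
# [[6], [7], [0, 6], [0, 7], [1, 6], [1, 7], ..., [0, 3, 6], [0, 3, 7]]
```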

Notice that the code does not depend on the exact symbol probabilities, but only on their ordering by frequency. We now prove that the dense coding is an optimal stop-cont coding.

Property 2.

The average length of a \((s,c) \)-Dense Code is minimal with respect to any other \((s,c) \) stop-cont code.

Proof:

Let us consider an arbitrary (s, c) stop-cont code, and let us write all the possible codewords in numerical order, as in Table 3, together with the source symbol they encode, if any. Then it is clear that (i) any unused codeword in the middle could be used to represent the source symbol with longest codeword, hence a compact assignment of target symbols is optimal; and (ii) if a less probable source symbol with a shorter codeword is swapped with a more probable symbol with a longer codeword then the average codeword length decreases, and hence sorting the source symbols by decreasing frequency is optimal.

It is now clear from Definition 3 that ETDC is a (\(2^{b-1} \),\(2^{b-1} \))-DC, and therefore \((s,c) \)-DC is a generalization of ETDC, where s and c can be adjusted to optimize the compression for the distribution of frequencies and the size of the vocabulary. Moreover, (\(2^{b-1} \),\(2^{b-1} \))-DC (i.e. ETDC) is more efficient than Tagged Huffman Code over b bits, because Tagged Huffman Code is essentially a non-dense (\(2^{b-1} \),\(2^{b-1} \)) stop-cont code, while ETDC is a (\(2^{b-1} \),\(2^{b-1} \))-Dense Code.

Example 4

Table 4 shows the codewords assigned to a small set of words ordered by frequency when using Plain Huffman Code, (6, 2)-DC, End-Tagged Dense Code (which is a (4, 4)-DC), and Tagged Huffman Code. Digits of three bits (instead of bytes) are used for simplicity (b = 3), and therefore s + c = 8. The last four columns present the products of the number of bytes by the frequency of each word, and their sum (the average codeword length) is shown in the last row.

Table 4 Comparative example among compression methods, for b = 3.

It is easy to see that, for this example, Plain Huffman Code and the (6, 2)-Dense Code are better than the (4, 4)-Dense Code (ETDC) and they are also better than Tagged Huffman Code. A (6, 2)-Dense Code is better than a (4, 4)-Dense Code because it takes advantage of the distribution of frequencies and of the number of words in the vocabulary. However the values (6, 2) for s and c are not the optimal ones since a (7, 1)-Dense Code obtains, in this example, an optimal compressed text having the same result as Plain Huffman Code.

The problem now consists of finding the s and c values (assuming a fixed b where \(2^b= s+c \)) that minimize the size of the compressed text for a specific word frequency distribution.

Optimal s and c values

The key advantage of SCDC with respect to ETDC is the ability to use the optimal s and c values. In all the real text corpora used in our experiments (with \(s+c = 2^8\)), the size of the compressed text, as a function of s, has only one local minimum. Figure 1 shows compression ratios and file sizes as a function of s for two real corpora, ZIFF and AP. It can be seen that a unique minimum exists for each collection, at s = 198 in ZIFF and s = 189 in AP. The table details the sizes and compression ratios when values of s close to the optimum are used.

Fig. 1. Compressed text sizes and compression ratios for different s values

The behavior of the curves is explained as follows. When s is very small, the number of high frequency words encoded with one byte is also very small (s words are encoded with one byte), but in this case c is large and therefore many words with low frequency will be encoded with few bytes: sc words will be encoded with two bytes, \(sc^2\) words with 3 bytes, \(sc^3\) with 4 bytes, and so on. From that point, as s grows, we gain compression in the more frequent words and lose compression in the less frequent words. At some later point, the compression lost in the last words is larger than the compression gained in words at the beginning, and therefore the global compression ratio worsens. That point gives us the optimal s value. Moreover, Fig. 1 shows that, in practice, the compression is relatively insensitive to the exact value of s around the optimal value.

If one assumes the existence of a unique minimum, a binary search strategy permits computing the best s and c values in O(b) steps. At each step, we check whether we are in the decreasing or increasing part of the curve and move towards the decreasing direction. However, the property of the existence of a unique minimum does not always hold. An artificial distribution with two local minima is given in Example 5.

Example 5

Consider a text with N = 50,000 words, and n = 5,000 distinct words. An artificial distribution of the probability of occurrence \(p_i\), for all words \(0 \le i < n\) in the text, is defined as follows:

$$ p_i = \left\{ \begin{array}{@{}l@{\quad}l@{\quad}l@{}} 0.4014 &{\rm if} ~~ i=0&\\[3pt] 0.044 &{\rm if} ~~ i \in [1..9] &\\[3pt] 0.0001 &{\rm if} ~~ i \in [10..59] &\\[3pt] 0.00004 &{\rm if} ~~ i \in [60..4999] \end{array}\right. $$

If the text is compressed using \((s,c) \)-Dense Code and assuming that b = 4 (therefore, \(s+c=2^b=16 \)), the distribution of the size of the compressed text depending on the value of s used to encode words has two local minima, at \(s=c=8 \) and at \(s=10,~c=6 \). This situation can be observed in Table 5.

Table 5 Size of compressed text for an artificial distribution.
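
The two minima can also be checked directly by computing the average codeword length for every s. This little script (ours) implements the distribution above for b = 4 and reproduces the behavior reported in Table 5.

```python
# Average codeword length, in digits, for the artificial distribution of Example 5 (b = 4).
p = [0.4014] + [0.044] * 9 + [0.0001] * 50 + [0.00004] * 4940   # n = 5,000 words, sums to 1

def avg_len(s, c, p):
    total, first, k = 0.0, 0, 1
    while first < len(p):
        block = p[first:first + s * c ** (k - 1)]   # words receiving k-digit codewords
        total += k * sum(block)
        first += s * c ** (k - 1)
        k += 1
    return total

for s in range(1, 16):
    print(s, round(avg_len(s, 16 - s, p), 5))
# the compressed size is locally minimal both at s = 8 (c = 8) and at s = 10 (c = 6)
```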

Therefore, a binary search does not always guarantee that we will find the optimum s value, and a sequential search over the \(2^b\) possible values is necessary. In practice, however, all the text collections we have considered do have a unique minimum and thus permit binary searching for the optimum s value. Section 4.3 explains why this property is expected to hold in all real-life text collections.

Let us consider how to find the optimum s value, either by binary or sequential search. Assume \(s+c=2^b \) and \(c>1 \) in the following. As \(sc^{k-1} \) different source symbols can be encoded using k digits, let us call

$$ W_k^s = \sum_{j=1}^k sc^{j-1} = s \frac{{c^k -1}}{{c-1}} $$
(1)

(where \(W_0^s=0 \)) the number of source symbols that can be encoded with up to k digits. Let us consider an alphabet of n source symbols, where p i is the probability of the i-th most frequent symbol (i = 0 for the first symbol) and \(p_i=0 \) if \(i \ge n \). Let us also define

$$ P_j = \sum_{i=0}^j p_i $$

as the sum of \(p_i \) probabilities up to \(j \). We also need

$$ f_k^s = \sum_{i=W_{k-1}^s} ^{W_k^s-1} p_i = P_{W_k^s-1} - P_{W_{k-1}^s -1} $$

the sum of probabilities of source symbols encoded with k digits by (s, c)-DC and

$$ F_k^s = \sum_{i=0}^{W_k^s-1} p_i = P_{W_k^s-1} $$

the sum of probabilities of source symbols encoded with up to k digits by \((s,c) \)-DC. Then, the average codeword length, \(L(s,c) \), for \((s,c) \)-DC is

$$ \displaylines{ L(s,c) = \sum_{k=1}^{K^s} k~f_k^s = \sum_{k=1}^{K^s} k~\big(F_k^s-F_{k-1}^s\big) = \sum_{k=0}^{K^s-1} (k-(k+1))~F_k^s ~+~ K^s F_{K^s}^s \cr = K^s - \sum_{k=0}^{K^s-1} F_k^s = \sum_{k=0}^{K^s-1} \big(1-F_k^s\big) }$$
(2)

where \(K^s= \lceil \log_c (1+\frac{n(c-1)}{s})\rceil \) is the maximum codeword length used by the code, and thus \(F_{K^s}^s=1 \).

Thus, if we precompute all the sums \(P_j \) in \(O(n) \) time, we can compute \(L(s,c) \) in \(O(K^s) = O(\log_c n) \) time. Therefore, a binary search requires \(O(n+b\log n) \) and a sequential search requires \(O(n+2^b\log n) \) time. We can assume \(2^b \le n \), as otherwise all the n codewords fit in one symbol. This makes the binary search complexity O(n), while the sequential search complexity can be as large as \(O(n \log n) \) if b is close to \(\log n \).
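
A sketch of this computation (ours): given the cumulative probabilities P, Eq. (2) gives \(L(s,c)\) in \(O(\log_c n)\) time, and the optimal s can then be found by a sequential scan (or by a binary search when the curve is known to be unimodal).

```python
def average_length(s, c, P):
    """L(s,c) via Eq. (2): sum over k of (1 - F_k), where P[j] is the cumulative
    probability of the j most frequent words (P[0] = 0, P[n] = 1)."""
    n = len(P) - 1
    total, W, k = 0.0, 0, 0             # W = W_k^s, starting at W_0^s = 0
    while W < n:
        total += 1.0 - P[W]             # adds 1 - F_k^s
        W += s * c ** k                 # W_{k+1}^s = W_k^s + s*c^k
        k += 1
    return total

def optimal_s(P, b=8):
    """Sequential scan over all s; a binary search over s suffices when L is unimodal."""
    return min(range(1, 1 << b), key=lambda s: average_length(s, (1 << b) - s, P))
```

Building P from the sorted frequencies is the O(n) preprocessing step; the scan over all s then adds the \(O(2^b \log n)\) term mentioned above.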

The case \(c=1 \) is hardly useful in practice: \(W_k^s = ks \), \(K^s = \lceil n/s \rceil \), and \(L(s,c) \) is computed in \(O(n) \) time.

Uniqueness of the minimum

As explained, only one minimum appeared in all our experiments with b = 8 and real collections. In this section, we show that this is actually expected to be the case in all practical situations. We remark that we are not going to give a proof, but just intuitive arguments of what is expected to happen in practice.

Heaps Law (Heaps, 1978) establishes that \(n = O(N^\beta) \), where n is the vocabulary size, N the number of words in the text, and β a constant that depends on the text type and is usually between 0.4 and 0.6 (Baeza-Yates and Ribeiro-Neto, 1999). This means that the vocabulary grows very slowly in large text collections. In fact, in a corpus obtained by joining different texts from TREC-2 and TREC-4 (see Section 6), which adds up to 1 gigabyte, we find only 886,190 different words. This behavior, combined with the fact that we use bytes (b = 8) as target symbols, is the key to the uniqueness of the minimum in practice.

Table 6 shows how the \(W_k^s \) (Eq. (1)) evolve, with b = 8. The maximum number of words that can be encoded with 6 bytes is reached when the value of s is around 40. In the same way, the maximum number of words that can be encoded with 5, 4, and 3 bytes is reached when the value of s is around 50, 60, and 90, respectively. Finally, the value of s that maximizes the number of words encoded with 2 bytes is s = c = 128, while the number of words encoded with just one byte always grows as s increases.

Table 6 Values of \(W_k^s\) for \(1 \le k \le 6 \).

Notice that compression clearly improves, even if a huge vocabulary of 2 million words is considered, when s increases from s = 1 until s = 128. This is because, up to s = 128, the number of words that can be encoded with 1 byte and with 2 bytes both grow, while the number that can be encoded with 3 bytes stays larger than 2 million. Only larger vocabularies can lose compression if s grows from s = 90 (where 2.5 million words can be encoded) up to s = 128. This happens because those words that can be encoded with 3-byte codewords for s = 90, would need 4-byte codewords when s increases. However, as it has been already pointed out, we never obtained a vocabulary with more than 886,190 words in all the real texts used, and that number of words is encoded with just 3 bytes with any s ≤ 187.

Therefore, in our experiments (and in most real-life collections) the space trade-off depends on the sum of the probability of the words encoded with only 1 byte, against the sum of the probability of words encoded with 2 bytes. The remaining words are always encoded with 3 bytes.

The average codeword lengths for two consecutive values of s are (Eq. (2))

$$\displaylines{ L(s,c) = 1 + \sum_{i\ge W_1^s} ~p_i + \sum_{i\ge W_2^s} p_i, \cr L(s+1,c-1)= 1 + \sum_{i\ge W_1^{s+1}} ~p_i + \sum_{i\ge W_2^{s+1}} p_i. } $$

Two different situations arise depending on whether \(s \le c\) or \(s > c\). When s < c, the length \(L(s,c) \) is always greater than \(L(s+1,c-1) \), because the numbers of words that are encoded with 1 byte and with 2 bytes both grow when s increases. Therefore, as s is increased, compression improves until the value s = c = 128 is reached. For s values beyond s = c (that is, s > c), compression improves when the value s+1 is used instead of s iff \(L(s+1,c-1)<L(s,c) \), that is,

$$ \sum_{i=W_1^s}^{W_1^{s+1}-1} p_i > \sum_{i= W_2^{s+1}}^{W_2^s-1} p_i, ~~~~ \textrm{which~ is} ~~~~~~ p_s > \sum_{i=sc+c}^{s+{sc}-1} p_i. $$

For each successive value of s, p s clearly decreases. On the other hand, \(\sum_{sc +c}^{sc+s-1} p_i \) grows. To see this, since the \(p_i \) values are decreasing, it is sufficient to show that each interval \([W_2^{s+1},~W_2^{s}-1] \) is longer than the former and it ends before the former starts (so it contains more and higher \(p_i \) values). That is: \((a) \) \(W_2^s-W_2^{s+1} > W_2^{s-1}-W_2^s \), and \((b) \) \(W_2^{s+1} < W_2^{s} \). It is a matter of algebra to see that both hold when \(s \ge c \). As a consequence, once s reaches a value such that \(p_s \leq \sum_{i= W_2^{s+1}}^{W_2^s-1} p_i \), successive values of s will also produce a loss of compression. Such loss of compression will increase in each successive step. Hence only one local minimum will exist.

The argument above is valid until s is so large that more than 3 bytes are necessary for the codewords, even with moderate vocabularies. For example, for \(s=230 \) we will need more than 3 bytes for n as low as 161,690, which is perfectly realistic. When this happens, we have to take into account limits of the form \(W_3^s \) and beyond, which do not have the properties \((a) \) and \((b) \) of \(W_2^s \). Yet, notice that, as we move from s to s+1, compression improves by \(p_s\) (which decreases with s) and it deteriorates because of two facts: (1) fewer words are coded with 2 bytes; (2) fewer words are coded with k bytes, for any \(k \ge 3 \). While factor (2) can grow or shrink with s, factor (1) always grows because of properties (a) and (b) satisfied by \(W_2^s\). Soon the losses due to factor (1) are so large that the possible oscillations due to factor (2) are completely negligible, and thus local minima do not appear anyway.

To illustrate the magnitudes we are considering, assume the text follows a Zipf (1949) Law with θ = 1.5, which is far lower than the values obtained in practice (Baeza-Yates and Ribeiro-Neto, 1999). If we move from s = 230 to s+1 = 231, the compression gain is \(p_s < 0.00029\). The loss due to \(W_2^s \) alone is \(> 0.00042 \), and the other \(W_k^s \) (\(k \ge 3) \) add almost another 0.00026 to the loss. So, no matter what happens with factor (2), factor (1) is strong enough to ensure that compression will deteriorate for s > 230.

Encoding, decoding, and searching

Encoding, decoding, and searching are extremely simple in ETDC and SCDC. We give in this section the algorithms for general SCDC. ETDC can be considered a particular case of SCDC, taking \(s = 2^{b-1}\) (128 in our case). We will make clear where the particular case of ETDC yields potentially more efficiency.

Encoding algorithm

Encoding is usually done in a sequential fashion over the whole sorted vocabulary, in time proportional to the output size. Alternatively, on-the-fly encoding of individual words is also possible: given a word rank i, its k-byte codeword can be easily computed in \(O(k) = O(\log_c i)\) time.

Table 7 Comparison of compression ratio among semistatic compressors. Vocabulary sizes are excluded.

There is no need to store the codewords (in any form such as a tree) nor the frequencies in the compressed file. It is enough to store the plain words sorted by frequency and the value of s used in the compression process. Therefore, the vocabulary will be basically of the same size as using Huffman codes and canonical trees.

Figure 2 gives the sequential encoding algorithm. It computes the codewords for all the words in the sorted vocabulary and stores them in a vector code.

Fig. 2. Sequential encoding process. It receives the vocabulary size n and the (s, c) parameters, and leaves the codewords in code
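
Along the lines of the figure (our sketch, not the paper's exact pseudocode), a sequential assignment can simply enumerate codewords in the order of Table 3, treating the last digit as a base-s counter and the remaining digits as a base-c counter:

```python
def assign_codes(n, s, c):
    """Assign (s,c)-Dense codewords to the n most frequent words, in order.
    code[i] is the codeword of rank i, as a list of digits (most significant first)."""
    code, digits = [], [c]              # rank 0 gets the single stopper digit c
    for _ in range(n):
        code.append(list(digits))
        if digits[-1] < c + s - 1:      # advance the stopper (last digit)
            digits[-1] += 1
        else:
            digits[-1] = c              # stopper wraps around; carry into continuers
            j = len(digits) - 2
            while j >= 0 and digits[j] == c - 1:
                digits[j] = 0
                j -= 1
            if j >= 0:
                digits[j] += 1
            else:
                digits.insert(0, 0)     # all continuers wrapped: codeword grows
    return code
```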

Notice that, when using ETDC, since \(s=c=128 \), the operations \(mod~ s \), \(mod~ c \), \(div~ s \), \(div~ c \), \(+~ c \), and \(\times~ c \) can be carried out using faster bitwise operations. As shown in Section 6, this yields better encoding times.
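
For instance (illustrative values, not code from the paper), with s = c = 128 the arithmetic reduces to shifts and masks:

```python
i = 1000
assert i % 128 == i & 0x7F                     # mod s / mod c
assert i // 128 == i >> 7                      # div s / div c
assert (i % 128) + 128 == (i & 0x7F) | 0x80    # + c: set the flag bit of the last byte
assert (i % 128) * 128 == (i & 0x7F) << 7      # * c
```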

Figure 3 presents the on-the-fly encoding algorithm. It outputs the bytes of the codeword one at a time from right to left, that is, the least significant bytes first.

Fig. 3. On-the-fly encoding process. It receives the word rank i and the (s, c) parameters, and outputs the bytes of the codeword in reverse order
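
A compact version of this procedure (our sketch, following the description above) is:

```python
def encode(i, s, c):
    """On-the-fly (s,c)-Dense encoding of the word ranked i (0-based).
    Digits are produced least significant first and reversed at the end."""
    out = [c + (i % s)]         # stopper (last byte of the codeword)
    i //= s
    while i > 0:                # continuer bytes
        i -= 1
        out.append(i % c)
        i //= c
    out.reverse()
    return out

# encode(0, 2, 6) == [6], encode(13, 2, 6) == [5, 7], encode(21, 2, 6) == [0, 3, 7]
```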

Decoding algorithm

The first step of decompression is to load the words that compose the vocabulary into a vector. Those words are already sorted by frequency.

In order to obtain the word that corresponds to a given codeword, the decoder runs a simple computation to obtain, from the codeword, the rank i of the word. Then it outputs the i-th vocabulary word. Figure 4 shows the decoding algorithm. It receives a codeword x and iterates over each byte of x. The end of the codeword is easily recognized because its value is not smaller than c. After the iteration, the value i holds the relative rank of the word among those encoded with k bytes. Then \(base = W_{k-1}^s \) is added to obtain the absolute rank. Overall, a codeword of k bytes can be decoded in \(O(k)=O(\log_c i) \) time.

Fig. 4. On-the-fly decoding process. It receives a codeword x and the (s, c) parameters, and returns the word rank. It is also possible to have base precomputed for each codeword size
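
A matching sketch of the decoder (ours; it recomputes base here, although, as the caption notes, base can be precomputed per codeword length):

```python
def decode(codeword, s, c):
    """Return the 0-based word rank encoded by a complete (s,c)-Dense codeword."""
    i = 0
    for x in codeword[:-1]:                  # continuer digits, all smaller than c
        i = i * c + x
    i = i * s + (codeword[-1] - c)           # relative rank among k-digit codewords
    k = len(codeword)
    base = s * (c ** (k - 1) - 1) // (c - 1) if c > 1 else s * (k - 1)   # W_{k-1}^s
    return base + i

# decode([6], 2, 6) == 0, decode([5, 7], 2, 6) == 13, decode([0, 3, 7], 2, 6) == 21
```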

Searching

The essential idea of searching a semistatic code is to search the compressed text for the compressed pattern P. However, some care must be exercised to make sure that the occurrences of codeword P correspond only to whole text codewords. Specifically, we must ensure that the following situations do not occur: (i) P matches a prefix of a text codeword A; (ii) P matches a suffix of a text codeword A; (iii) P matches strictly within a text codeword A; (iv) P matches within the concatenation of two or more codewords.

Case (i) cannot occur in prefix codes (such as Huffman or our Dense Codes). However, cases (ii) to (iv) can occur in Plain Huffman Code. This is why searches on Plain Huffman Code must proceed byte-wise so as to keep track of codeword beginnings. Tagged Huffman Code, on the other hand, rules out the possibility of situations (ii) to (iv) thanks to the flag bit that distinguishes codeword beginnings. Hence Tagged Huffman Code can be searched without any care using any string matching algorithm.

End-Tagged Dense Code, on the other hand, uses the flag bit to signal the end of a codeword. (s, c)-Dense Code does not have such a tag, yet a value \(\ge c \) serves anyway to distinguish the end of a codeword. It is easy to see that situations (iii) and (iv) cannot arise in this case, yet case (ii) is perfectly possible. This is because Dense Codes are not suffix codes, that is, a codeword can be a suffix of another.

Yet, with a small modification, we can still use any string matching algorithm (in particular, those of the Boyer-Moore family, which permit skipping text bytes). We just run the search algorithm without any care and, each time a match of the whole pattern P occurs in the text, we check whether the occurrence corresponds to a whole text codeword or just to a suffix of a text codeword. For this sake, it is sufficient to check whether the byte preceding the first matched byte is \(\ge c \) or not. Figure 5 shows an example of how false matches can be detected (using “bytes” of three bits and ETDC). Note that Plain Huffman Code does not permit such simple checking.

Fig. 5. Searching using End-Tagged Dense Code

This overhead in searches is negligible because checking the previous byte is only needed when a match is detected, which is infrequent. As shown in Section 6.4, this small disadvantage with respect to Tagged Huffman Code (which is both a prefix and a suffix code) is compensated by the fact that the compressed text is smaller with Dense Codes than with Tagged Huffman Code.

Figure 6 gives a search algorithm based on Horspool's variant of Boyer-Moore (Horspool, 1980; Navarro and Raffinot, 2002). This algorithm is especially well suited to this case (codewords of length at most 3–4, characters with relatively uniform distribution in {0 … 255}).

Fig. 6. Search process using Horspool's algorithm. It receives a codeword x and its length k, the parameter c, the compressed text T to search, and its length z. It outputs all the text positions that start a true occurrence of x in T
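
A sketch of this search (ours, in the spirit of the figure): plain Horspool over the compressed byte stream, plus the one-byte check that discards occurrences that are merely suffixes of longer codewords.

```python
def search(T, x, c):
    """Report the positions of T (the compressed text, a bytes object) where the
    codeword x (also bytes) occurs aligned to a codeword beginning."""
    k, z = len(x), len(T)
    shift = {b: k for b in range(256)}            # Horspool shift table
    for j in range(k - 1):
        shift[x[j]] = k - 1 - j
    i, out = 0, []
    while i + k <= z:
        j = k - 1
        while j >= 0 and T[i + j] == x[j]:
            j -= 1
        if j < 0 and (i == 0 or T[i - 1] >= c):   # true occurrence: preceded by a stopper
            out.append(i)
        i += shift[T[i + k - 1]]
    return out
```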

Empirical results

We present in this section experimental results on compression ratios, as well as speed in compression, decompression, and searching. We used some text collections from TREC-2: AP Newswire 1988 (AP) and Ziff Data 1989–1990 (ZIFF); and from TREC-4: Congressional Record 1993 (CR) and Financial Times 1991 to 1994 (FT91 to FT94). In addition, we included the small Calgary corpus (CALGARY), and two larger collections: ALL_FT aggregates corpora FT91 to FT94, and ALL aggregates all our corpora.

We compressed the corpora with Plain Huffman Code (PHC), Tagged Huffman Code (THC), our End-Tagged Dense Code (ETDC) and our (s, c)-Dense Code (SCDC), using in all cases bytes as the target symbols (b = 8). In all cases, we defined words as maximal contiguous sequences of letters and digits, we distinguished upper and lower case, and we adhered to the spaceless word model (Moura et al., 2000). Under this model, words and separators share a unique vocabulary, and single-space separators are assumed by default and thus not encoded (i.e., two contiguous codewords representing words imply a single space between them).
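
To make the spaceless word model concrete, here is a small tokenizer sketch (ours; \w is only an approximation of the letters-and-digits definition used in the paper):

```python
import re

def spaceless_symbols(text):
    """Source symbols under the spaceless word model: words are maximal runs of
    letters/digits; a single space between two words is implicit (not emitted);
    every other separator is a symbol of its own."""
    symbols = []
    for word, sep in re.findall(r"(\w+)|(\W+)", text):
        if word:
            symbols.append(word)
        elif sep != " ":                 # a lone blank between words is dropped
            symbols.append(sep)
    return symbols

print(spaceless_symbols("the rose,  the rose"))
# ['the', 'rose', ',  ', 'the', 'rose']
```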

We also include comparisons against the most competitive classical compressors. These turn out to be adaptive techniques that permit neither direct access nor competitive direct searching, even compared to PHC. They are Gnu gzip, a Ziv-Lempel compressor (Ziv and Lempel, 1977); Seward's bzip2, a compressor based on the Burrows-Wheeler transform (Burrows and Wheeler, 1994); and Moffat's arith, a zero-order word-based modeler coupled with an arithmetic coder (Carpinelli et al., 1999). Gzip and bzip2 have options -f, where they run faster, and -b, where they compress more.

Compression ratio

Table 7 shows the compression ratios obtained by the four semistatic techniques when compressing the different corpora. The second column gives the original size in bytes. Columns 3 to 6 (sorted by compression performance) give the compression ratios for each method. Column 7 shows the (small) loss of compression of SCDC with respect to PHC, and the last column shows the difference between SCDC and ETDC. The fourth column, which refers to SCDC, also gives the optimal (s, c) values.

We excluded the size of the compressed vocabulary in the results. This size is negligible and similar in all cases, although a bit smaller in SCDC and ETDC because only the sorted words are needed.

PHC obtains the best compression ratio (as expected from an optimal prefix code). ETDC always obtains better results than THC, with an improvement of 7.7%–9.0%. SCDC improves ETDC compression ratio by 1.7%–2.3%, and it is worse than the optimal PHC only by 0.49%–1.06% (being 0.6% in the whole corpus ALL). Therefore, our Dense Codes retain the good searchability and random access of THC, but their compression ratios are much closer to those of the optimum PHC.

Table 8 compares ETDC and SCDC against adaptive compressors. This time we have added the size of the vocabulary, which was compressed with classical (character-oriented, bit-based) Huffman. It can be seen that SCDC always compresses more than gzip -b, except on the very small Calgary file. The improvement in compression ratio is between 7% and 14%, except on the ZIFF collection. The other compressors, however, compress more than SCDC by a margin of 6% to 24% (even excluding the short Calgary file). We recall that these compressors permit neither local decompression nor direct search. We also show next that they are very slow at decompression.

Table 8 Comparison of compression ratio against adaptive compressors.

Compression time

In this section we compare the Dense and Plain Huffman encoding phases and measure the code generation time. We do not include THC because it works exactly as PHC, with the only difference of generating \(2^{b-1}\)-ary trees instead of the \(2^b\)-ary trees of PHC. This causes some loss in encoding speed. We also include the adaptive compressors.

Fig. 7. Vocabulary extraction and encoding phases

The model used for compressing a corpus in our experiments is described in Fig. 7. It consists of three phases.

  1. Vocabulary extraction. The corpus is processed once in order to obtain all the distinct words in it and their number of occurrences. The result is a list of n pairs (word, frequency), which is then sorted by frequency. This phase is identical for PHC, SCDC, and ETDC.

  2. Encoding. Each vocabulary word is assigned a codeword. This process is different for each method:

    • The PHC encoding phase is the application of the Huffman technique (Huffman, 1952; Moffat and Katajainen, 1995; Moffat and Turpin, 1996). Encoding takes O(n) time overall.

    • The SCDC encoding phase has two parts: The first computes the list of accumulated frequencies and searches for the optimal s and c values. Its cost is O(n) in practice (Sections 4.2 and 4.3). After obtaining the optimal s and c values, sequential encoding is performed (Fig. 2). The overall cost is O(n).

    • The encoding phase is even simpler in ETDC than in SCDC, because ETDC does not have to search for the optimal s and c values (as they are fixed to 128). Therefore only the sequential code generation phase is performed. It costs O(n) time overall.

    In all cases, the result of the encoding phase is a hash table of pairs (word, codeword).

  3. Compression. The whole source text is processed again. For each source word, the compression process looks it up in the hash table and outputs its corresponding codeword.

Note that both PHC and SCDC encoding phases run in linear time. However, Huffman's constant is in practice larger because it involves more operations than just adding up frequencies.

Given that the vocabulary extraction phase, the process of building the hash table of pairs, and the compression phase are identical in PHC, ETDC, and SCDC, we first measure only the code generation time (\(T_1 - T_0\) in Fig. 7), to appreciate the speedup due to the simplicity of Dense Codes. We then measure also the overall compression time, to assess the impact of encoding in the whole process and to compare against adaptive compressors.

Encoding time

Table 9 shows the results on code generation time. A dual Intel® Pentium®-III 800 MHz system, with 768 MB SDRAM-100 MHz, was used in our tests. It ran Debian GNU/Linux (kernel version 2.2.19). The compiler was gcc version 2.95.2 with -O9 compiler optimizations. The results measure encoding user time.

Table 9 Code generation time comparison.

The second column gives the corpus size measured in words, while the third gives the number of different words (vocabulary size). Columns 4–6 give encoding time in milliseconds for the different techniques (from faster to slower). Column 7 measures the gain of ETDC over SCDC, and the last the gain of SCDC over PHC.

ETDC takes advantage of its simpler encoding phase with respect to SCDC to reduce its encoding time by about 25%. This difference corresponds to two factors. One is that, when using ETDC, s is 128, and then bitwise operations (as shown in Section 5.1) can be used. The other is the time SCDC needs to find the optimal s and c values, which is mainly spent computing the vector of accumulated frequencies. With respect to PHC, ETDC decreases the encoding time by about 60%. Code generation is always about 45% faster for SCDC than for PHC.

Although the encoding time is in all methods O(n) under reasonable conditions, ETDC and SCDC perform simpler operations. Computing the list of accumulated frequencies and searching for the best (s, c) pair only involve elementary operations, while the process of building a canonical Huffman tree has to deal with the tree structure.

Overall compression time

Section 6.2.1 only shows the encoding time, that is, the time to assign a codeword to each word in the sorted vocabulary. We now consider the whole compression time. Table 10 shows the compression times in seconds for the four techniques and for the adaptive compressors.

Table 10 Compression time comparison (in seconds).

Observe that the two passes over the original file (one for vocabulary extraction and the other to compress the file) take the same time with all semistatic techniques. These tasks dominate the overall compression time, and blur out most of the differences in encoding time. The differences are in all cases very small, around 1%.

It is interesting to note that the time to write the compressed file benefits from a more compact encoding, as there is less data to write. Therefore, a technique with faster encoding, like ETDC, is penalized in the overall time by its extra space with respect to SCDC and PHC.

As a result, PHC is faster on the largest files, since it has to write a shorter output file. However, on smaller files, where the size of the output file is not so different and the vocabulary size is more significant, ETDC is faster due to its faster encoding. SCDC is in between, and it is the fastest on some intermediate-size files. THC, on the other hand, is consistently the slowest, as it is slow at encoding and produces the largest output files.

With respect to adaptive compressors, PHC, THC, ETDC, and SCDC are all usually 10%–20% faster than gzip (even in its fastest mode), and 2.5–6.0 times faster than bzip2 and arith (the only two that beat the compression ratios of ETDC and SCDC).

Decompression time

Decoding is almost identical for PHC, THC, ETDC, and SCDC. The process starts by loading the words of the vocabulary into a vector V. To decode a codeword, SCDC needs the s value used in compression, while PHC and THC also need to load two vectors, base and first (Moffat and Katajainen, 1995). Next, the compressed text is read and each codeword is replaced by its corresponding uncompressed word. Since the end of a codeword can be detected using the s value in the case of SCDC (128 for ETDC) and the first vector in the case of PHC and THC, decompression proceeds codeword-wise. Given a codeword C, a simple decoding algorithm obtains the position i such that V[i] is the uncompressed word corresponding to C. The decompression process takes O(z) time, where z is the size of the compressed text.
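A minimal sketch of codeword-wise dense decoding follows (again ours, under the same stopper/continuer byte convention assumed earlier): each codeword is mapped back to its rank i and V[i] is emitted.

    def decompress(data, V, s=128, c=128):
        # data: compressed byte string; V: vocabulary sorted by decreasing frequency.
        # Continuer bytes are < c; a byte >= c is a stopper and ends the codeword.
        out, i = [], 0
        for b in data:
            if b < c:                       # continuer: accumulate the partial rank
                i = i * c + b + 1
            else:                           # stopper: codeword complete
                out.append(V[i * s + (b - c)])
                i = 0
        return out

PHC and THC detect codeword ends with the first vector of the canonical code instead of a simple byte comparison, but the codeword-wise structure of the loop is the same.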

Table 11 gives overall decompression time in seconds for the four semistatic techniques and for adaptive compressors. Again, in addition to the simplicity of the decoding process, we must consider the penalty posed by codes with worse compression ratios, as they have to process a longer input file.

Table 11 Decompression time comparison (in seconds).

We can see again that the times for PHC, THC, ETDC, and SCDC are very similar. The differences are under 1.5 seconds even when decompressing 1-gigabyte corpora. It is nevertheless interesting to comment on some aspects. PHC benefits from the smaller size of the compressed file. ETDC processes a larger file, yet it is faster than PHC in many cases thanks to its simpler and faster decoding. SCDC is between both in compression ratio and decoding cost, and this is reflected in its overall decompression time: it is never the fastest, but it is intermediate in many cases. THC is again the slowest method, as its input file is the largest and its decoding algorithm is as slow as that of PHC.

These four compressors, on the other hand, are usually 10%–20% faster than gzip (even using its fastest mode), and 7–9 times faster than bzip2 and arith.

Search time

As explained in Section 5.3, we can search on ETDC and SCDC using any string matching algorithm provided we check every occurrence to determine whether it is a valid match. The check consists of inspecting the byte preceding the occurrence (Fig. 6).

In this section, we aim at determining whether this check influences search time and, in general, how searching ETDC and SCDC performs compared to searching THC. Given a search pattern, we find its codeword and then search the compressed text for it using Horspool's algorithm (Horspool, 1980). This algorithm is depicted in Fig. 6, adapted to ETDC and SCDC. In the case of searching THC, line (7) is just “if j < 0 output i”.
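The following sketch conveys the idea behind Fig. 6 (it is our own rendering, not the figure's code, and it assumes the same stopper convention as before, i.e., stopper bytes have values ≥ c): Horspool's algorithm reports a candidate match only if it starts at the beginning of the text or right after a stopper byte.

    def horspool_search(text, codeword, c=128):
        # Bad-character shift: distance from the last occurrence of each byte value
        # (excluding the final position of the codeword) to the end of the codeword.
        m, n = len(codeword), len(text)
        shift = {b: m for b in range(256)}
        for j in range(m - 1):
            shift[codeword[j]] = m - 1 - j
        occurrences, i = [], 0
        while i + m <= n:
            j = m - 1
            while j >= 0 and text[i + j] == codeword[j]:
                j -= 1
            if j < 0 and (i == 0 or text[i - 1] >= c):   # extra check: codeword boundary
                occurrences.append(i)
            i += shift[text[i + m - 1]]
        return occurrences

For THC the test on text[i − 1] is simply dropped, as noted above.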

Comparing methods that encode the same word in different ways is not simple. The time needed by Horspool's algorithm is inversely proportional to the length of the codeword sought. If one method encodes a word with 2 bytes and the other with 3 bytes, Horspool will be much faster for the second method.

In our first experiment, we aim at determining the relative speed of searching ETDC, SCDC, and THC when codewords of the same length are sought (classical compressors are not competitive at searching and are hence left out of this experiment). In order to isolate the length factor, we carefully chose the most and least frequent words that are assigned 1-, 2-, and 3-byte codewords by the three methods simultaneously. ETDC and SCDC never had codewords longer than 3 bytes, although THC does. We consider later the issue of searching for “random” codewords.

Table 12 compares the search times in the ALL corpus. The first three columns give the length of the codeword, the most and least frequent words chosen for that codeword length, and the number of occurrences of those words in the text. The frequency of the words is important because ETDC and SCDC must check every occurrence, but THC must not. Columns 4–7 give the search times in seconds for the three techniques (fastest to slowest). The last two columns give the decrease in search time of SCDC with respect to ETDC and THC.

Table 12 Searching time comparison.

It can be seen that searching ETDC for a k-byte codeword is 2%–5% faster than searching THC. Even when searching for “the”, which has one true occurrence every 42 bytes on average, the search in ETDC is faster. This shows that the extra comparison needed in ETDC is more than compensated by its better compression ratio (which implies that a shorter file has to be traversed during searches). The same search algorithm gives even better results on SCDC, because its compression ratio is significantly better. Searching SCDC is 2.5%–7% faster than searching ETDC, and 5%–10% faster than searching THC.

Let us now consider the search for random words. As expected from Zipf's (1949) Law, a few vocabulary words account for 50% of the text words (these are usually stopwords: articles, prepositions, etc.), and a large fraction of vocabulary words appear just a few times in the text collection. Thus, picking the words from the text and picking them from the vocabulary are very different choices. The real question is what distribution the words actually searched for in text databases follow. For example, stopwords are never searched for in isolation, because they produce a huge number of occurrences and their information value is null (Baeza-Yates and Ribeiro-Neto, 1999). On the other hand, many words that appear only once in our text collections are also irrelevant, as they are misspellings or meaningless strings, so they will not be searched for either.

It is known that the search frequency of vocabulary words follows a Zipf distribution as well, which is not related to that of the word occurrences in the text (Baeza-Yates and Navarro, 2004). As a rough approximation, we have followed the model (Moura et al., 2000) where each vocabulary word is sought with uniform probability. Yet, we have discarded words that appear only once, trying to better approximate reality. As the search and occurrence distributions are independent, this random-vocabulary model is reasonable.
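A small sketch of this random-vocabulary query model follows (the function and parameter names are ours): words are drawn uniformly from the vocabulary after discarding those that occur only once.

    import random

    def sample_query_words(vocab_freqs, k, seed=0):
        # vocab_freqs: dict mapping each vocabulary word to its number of occurrences.
        # Discard hapax legomena, then pick k distinct words uniformly at random.
        candidates = [w for w, f in vocab_freqs.items() if f > 1]
        return random.Random(seed).sample(candidates, k)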

The paradox is that this model (and reality as well) favors the worst compressors. For example, THC has many 4-byte codewords in our largest text collections, whereas ETDC and SCDC use at most 3 bytes. On large corpora, where THC uses 4 bytes on a significant part of the vocabulary, there is a good chance of picking a longer codeword with THC than with ETDC and SCDC. As a result, the Horspool search for that codeword will be faster on THC.

More formally, the Horspool search for a codeword of length m in a text of N words, for which we achieved c bytes/word compression, costs essentially Nc/m byte comparisons. Calling \(m_{THC} \) and \(m_{SCDC} \) the average codeword lengths (of words randomly chosen from the vocabulary) in both methods, and \(c_{THC} \) and \(c_{SCDC} \) the bytes/word compressions achieved (that is, the average codeword length in the text), searching THC costs \(Nc_{THC}/m_{THC} \) and searching SCDC costs \(Nc_{SCDC}/m_{SCDC} \). The ratio of the search time of THC to that of SCDC is \(\frac{c_{THC}/c_{SCDC}}{m_{THC}/m_{SCDC}} \). Because of Zipf's Law, the ratio \(m_{THC}/m_{SCDC} \) might be larger than \(c_{THC}/c_{SCDC} \), and thus searching THC can be faster than searching SCDC.
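As a purely hypothetical illustration (the figures below are not taken from our experiments): if \(c_{THC}/c_{SCDC} = 1.05\) while \(m_{THC}/m_{SCDC} = 1.20\), the ratio becomes \(1.05/1.20 \approx 0.875\), so the Horspool scan over THC would run about 12% faster despite its larger input file.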

Table 13 shows the results of searching for (the same set of) 10,000 random vocabulary words, giving mean and standard deviation for each method.

Table 13 Searching for random patterns.

The results are as expected after the discussion. Searching THC is up to 13.7% faster on very long text collections (ALL_FT and ALL), thanks to the many 4-byte codeword assignments it makes. On the other hand, SCDC is 5.0%–6.5% faster than THC on medium-size text collections. Even ETDC is 2%–5% faster than THC on those texts.

The next experiment compares multi-pattern searching on a text compressed with ETDC and SCDC against those searches on uncompressed text. To search the compressed text we applied the Set Horspool algorithm (Horspool, 1980; Navarro and Raffinot, 2002), with the small modifications needed to deal with our codes. Three different algorithms were tested to search the uncompressed text: (i) our own implementation of the Set Horspool algorithm, (ii) the authors' implementation of the Set Backward Oracle Matching (SBOM) algorithm (Allauzen et al., 1999), and (iii) the agrep software (Wu and Manber, 1992a, b), a fast approximate pattern-matching tool which allows, among other things, searching a text for multiple patterns. Agrep searches the text and returns those chunks containing one or more searched patterns. The default chunk is a line, as the default chunk separator is the newline character. Once the first search pattern is found in a chunk, agrep skips processing the remaining bytes in the chunk. This speeds up agrep searches when a large number of patterns are searched. However, it does not give the exact positions of all the searched patterns. To make a fairer comparison, in our experiments we also tried agrep with the reversed patterns, which are unlikely to be found. This maintains essentially the same statistics of the searched patterns and better reflects the real search cost of agrep.

By default, the search tools compared in our experiments (except agrep) run in silent mode and count the number of occurrences of the patterns in the text. Agrep was forced to do the same by setting the parameters -s -c.
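For intuition only, the following simplified sketch (ours; it scans codeword-wise and does not implement the byte skipping of the Set Horspool algorithm actually used in the experiments) shows how naturally multi-pattern search maps onto dense codes, since codewords are self-delimited by their stopper byte:

    def multi_search(data, codewords, c=128):
        # data: compressed byte string; codewords: iterable of query codewords.
        # Walk the text codeword by codeword and report the starting positions
        # of the codewords that belong to the query set.
        targets = set(codewords)
        occurrences, start = [], 0
        for pos, b in enumerate(data):
            if b >= c:                       # stopper: a codeword ends at pos
                if data[start:pos + 1] in targets:
                    occurrences.append(start)
                start = pos + 1
        return occurrences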

Table 14 shows the average time (in seconds) needed to search for 1000 sets of random words. To choose these sets of random words we considered the influence of the length of the words in searches (a longer pattern is usually searched for faster) by separately considering words of length 5, 10, or greater than 10. With respect to the number of words in each set, we considered several values from 5 to 1000.

Table 14 Multi-pattern searches over the ALL corpora (in seconds).

The results clearly show that searching the compressed text is much faster than searching the uncompressed version. Only default agrep (which skips the rest of a chunk once a pattern is found) improves on the compressed searches, yet this only happens when more than 100 words are searched for and, as explained, it does not reflect the real cost of a search. In the case of agrep with reversed patterns, it is interesting to distinguish two cases: (i) searching for reversed patterns of length greater than or equal to 10; those reversed patterns never occur in the text, and therefore search time worsens as the number of searched patterns increases; (ii) searching for reversed patterns of length 5; in this case, some of the reversed patterns are found in the text, and the probability of finding a pattern increases with the number of searched patterns, which explains why search time improves when the number of search patterns is large (> 100 patterns).

If we focus on our implementations of the Set Horspool's algorithm applied to both compressed and plain text (which is the fairest comparison between searches in compressed and plain text), we see that searching compressed text is around 3–5 times faster than searching plain text.

It is also noticeable that compressed search time does not depend on the length of the uncompressed pattern (in fact, the small differences shown are related to the number of occurrences of the searched patterns). On the other hand, searching plain text for longer patterns is faster, as it permits skipping more bytes during the traversal of the searched file.

Moura et al. (2000) showed that, if searches allowing errors are considered, searching text compressed with THC (which has roughly the same search times as our Dense Codes) can be up to 8 times faster than searching the uncompressed version of the text.

We have left aside from this comparison the block-boundary method proposed by Moura et al. (2000) to permit Horspool searching on PHC. In the experiments they report, that method is 7% slower than THC on medium-size collections (where SCDC is at least 5% faster than THC), and the block-alignment poses a space overhead of 0.78% over PHC (where SCDC overhead is around 0.60%). Thus SCDC is even closer to the optimum PHC compression ratio and 12%–19% faster. On large corpora, the same codeword length effect that affects SCDC will make the search on PHC even slower, as those codewords are shorter on average.

Conclusions and future work

We have presented End-Tagged Dense Code (ETDC) and (s, c)-Dense Code (SCDC), two statistical semistatic word-based compression techniques suitable for text databases. SCDC is a generalization of ETDC which improves its compression ratio by adapting the number of stopper/continuer values to the corpus to be compressed.

Although the best semistatic compression ratio is achieved with Huffman coding, different variants that lose some compression in exchange for random access and fast codeword searching are preferable in compressed text databases. In particular, Tagged Huffman Code (Moura et al., 2000) has all those features in exchange for an 11% compression loss compared to Plain Huffman Code.

We have shown that ETDC and SCDC maintain all the good searching features of Tagged Huffman Code, yet their compression ratio is much closer to that of the optimum Plain Huffman Code (just 0.6% off). In addition, ETDC and SCDC are much simpler and faster to program, create, and manipulate.

Figure 8(a) summarizes compression ratio, encoding time, compression and decompression time, and search time for the semistatic word-based statistical methods. In the figure, the measures obtained in the AP corpus are shown normalized to the worst value. The lower the value in a bar, the better the result.

Fig. 8. Comparison of Dense Codes against other compressors on corpus AP

Using the same corpus and format, Fig. 8(b) compares Dense Codes against other popular compressors: gzip, bzip2, and arith. We recall that gzip and bzip2 have options “-f” (fast compression) and “-b” (best compression).

Figure 9 presents the comparison in terms of space/time tradeoffs for encoding and search time versus compression ratio (overall compression and decompression times are very similar in all cases). The figure illustrates that, while Plain Huffman Code remains interesting because it has the best compression ratio, Tagged Huffman Code has been surpassed by both ETDC and SCDC in every respect: compression ratio, encoding speed, compression and decompression speed, and search speed. In addition, Dense Codes are simpler to program and manipulate. We also note that more complex searches (such as regular expression or approximate searching) can be handled with ETDC or SCDC just as with Plain Huffman Code (Moura et al., 2000), by means of byte-wise processing of the text.

Fig. 9. Space/time tradeoffs of Dense and Huffman-based codes on corpus AP

Fig. 10. Space/time tradeoffs of Dense Codes versus adaptive compressors on corpus AP

Figure 10 compares ETDC and SCDC against other popular compressors in terms of space/time tradeoffs for compression and decompression versus compression ratio. It can be seen that gzip is surpassed by our compressors in all aspects: ETDC and SCDC obtain better compression ratio, compression time, and decompression time than gzip. Although arith and bzip2 compress up to 20% more than ETDC and SCDC, the latter two are 2.5–6 times faster at compression and 7–9 times faster at decompression. This shows that, even disregarding the fact that these classical compressors cannot perform efficient local decompression or direct text search, our new compressors are an attractive alternative even from a pure-compression point of view. We remark that these conclusions only hold when compressing not-too-small (say, at least 10 megabytes) natural language text collections.

ETDC and SCDC can be considered the first two members of a newborn family of dense compressors. Dense compressors have a number of possibilities in other applications. For example, we have faced the problem of dealing with growing text collections. Semistatic codes have an important drawback for compressed text databases: the whole corpus to be compressed has to be available before compression starts. As a result, a compressed text database must either periodically recompress all its text collections to accommodate newly added text, or accept a progressive degradation of compression ratio because of changes in the distribution of its collections. We have some preliminary results on a semistatic compressor based on Dense Codes that addresses this problem (Brisaboa et al., 2005a). The idea is that, instead of changing any codeword assignment when a word increases its frequency, we concatenate its new occurrence with the next word and consider the pair as a new source symbol. Instead of using shorter codewords for more frequent symbols, we let codewords encode more source symbols as those appear. This is easy to do thanks to the simplicity of Dense Codes.

This leads naturally to the use of Dense Codes as variable-to-variable codes, a relatively new research field (Savari and Szpankowski, 2002) based on the use of source symbols of variable length, which are in turn encoded with variable-length codewords.

We have also explored the use of Dense Codes in dynamic scenarios (Brisaboa et al., 2004; Fariña, 2005), where we have presented adaptive versions of both word-based byte-oriented Huffman and End-Tagged Dense Codes. These approaches do not permit efficient direct search of the compressed text, since the word/codeword assignment varies frequently as the model changes during compression.

The simplicity of Dense Codes, which results in much faster encoding, has little impact on the overall compression time of semistatic compressors, as encoding is a tiny part of the whole process. In adaptive compression, however, updating the model is a heavy task that must be carried out for every new text word. As a result, the overall compression time is much better with Dense Codes than with adaptive Huffman codes, whereas the compression ratio is almost the same.

Recently, we have managed to modify dynamic ETDC to allow direct search in the compressed text (Brisaboa et al., 2005b). This adaptive technique gives more stability to the codewords assigned to the original symbols as the compression progresses, exploiting the fact that Dense Codes depend only on the frequency rank and not the actual frequency of words. Basically, the technique changes the model only when it is absolutely necessary to maintain the original compression ratio, and it breaks the usual compressor-decompressor symmetry present in most adaptive compression schemes. This permits much lighter decompressors and searchers, which is very valuable in mobile applications.

Finally, Dense Codes have proved to be a valuable analytic tool to bound the redundancy of D-ary Huffman codes on different families of distributions (Navarro and Brisaboa, 2006).
