Compressing Semistructured Text Databases

  • Joaquín Adiego
  • Gonzalo Navarro
  • Pablo de la Fuente
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2633)

Abstract

We describe a compression model for semistructured documents, called the Structural Contexts Model, which exploits the context information usually implicit in the structure of the text. The idea is to use a separate semiadaptive model to compress the text that lies inside each different structure type (e.g., each different XML tag). The intuition is that the distributions of the texts belonging to a given structure type should be similar to each other, and different from those of other structure types. We test the idea using word-based Huffman coding, the standard for compressing large natural language textual databases, and show that our compression method obtains significant improvements in compression ratios. We also analyze the possibility that storing separate models may not pay off if the distributions of different structure types are not different enough, and present a heuristic that merges models so as to minimize the total size of the compressed database. This merging yields an additional improvement over the plain technique. A comparison against existing prototypes shows that our method is a competitive choice for compressed text databases.
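The abstract's two ideas, one word-based Huffman model per structure type plus a greedy heuristic that merges models when the merge reduces total size, can be illustrated with a small sketch. This is not the authors' implementation: the tag names, the 32-bit per-vocabulary-entry model overhead, and the pairwise greedy merge order are illustrative assumptions.

```python
import heapq
from collections import Counter

def huffman_bits(freqs):
    """Total encoded bits for a frequency table under Huffman coding,
    computed as the sum of internal-node weights of the Huffman tree."""
    if len(freqs) == 1:
        return sum(freqs.values())          # degenerate case: 1 bit per symbol
    heap = [(f, i) for i, f in enumerate(freqs.values())]
    heapq.heapify(heap)
    bits, nxt = 0, len(heap)
    while len(heap) > 1:
        f1, _ = heapq.heappop(heap)
        f2, _ = heapq.heappop(heap)
        bits += f1 + f2                     # each merge contributes its weight
        heapq.heappush(heap, (f1 + f2, nxt))
        nxt += 1
    return bits

def model_cost(words, entry_bits=32):
    """Encoded size plus a toy per-vocabulary-entry overhead for storing
    the model itself (entry_bits is an illustrative constant)."""
    freqs = Counter(words)
    return huffman_bits(freqs) + entry_bits * len(freqs)

# Hypothetical text grouped by the XML tag it appears under.
docs = {
    "title":    "compressing semistructured text databases".split(),
    "abstract": ("we describe a compression model for semistructured "
                 "text databases").split(),
    "author":   "adiego navarro de la fuente".split(),
}

separate = sum(model_cost(w) for w in docs.values())            # one model per tag
single = model_cost([w for ws in docs.values() for w in ws])    # one global model

# Greedy merging heuristic: repeatedly merge the pair of models whose
# union shrinks the total size the most, until no merge helps.
models = {tag: list(ws) for tag, ws in docs.items()}
while len(models) > 1:
    best = None
    tags = sorted(models)
    for i, a in enumerate(tags):
        for b in tags[i + 1:]:
            gain = (model_cost(models[a]) + model_cost(models[b])
                    - model_cost(models[a] + models[b]))
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, a, b)
    if best is None:
        break
    _, a, b = best
    models[a + "+" + b] = models.pop(a) + models.pop(b)
```

Because a merge is only performed when it reduces the estimate, the final total can never exceed the cost of keeping all models separate; whether it beats the single global model depends on how much vocabulary the structure types share.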

Keywords

Text Compression · Compression Model · Semistructured Documents · Text Databases

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Joaquín Adiego¹
  • Gonzalo Navarro²
  • Pablo de la Fuente¹
  1. Departamento de Informática, Universidad de Valladolid, Valladolid, Spain
  2. Departamento de Ciencias de la Computación, Universidad de Chile, Santiago, Chile