Scientific Data Lossless Compression Using Fast Neural Network

  • Jun-Lin Zhou
  • Yan Fu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)

Abstract

Scientific computing generates enormous volumes of data from complex simulations, often several terabytes per run, on which general-purpose compression methods perform poorly. Neural networks have the potential to extend data compression algorithms beyond the character-level (n-gram) models currently in use, but they have usually been avoided as too slow to be practical. We present a lossless compression method that uses a fast neural network, based on the Maximum Entropy principle, together with an arithmetic coder. The compressor is a bit-level predictive arithmetic encoder that uses a two-layer fast neural network to predict the probability distribution of each bit. In the training phase, an improved adaptive variable learning rate speeds convergence. The proposed compressor achieves better compression than popular compressors (bzip, zzip, lzo, ucl, and deflate) on the lared-p data set, and remains competitive in time and space for practical applications.
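
The scheme the abstract outlines can be sketched compactly. Below is a minimal, self-contained illustration in Python (the paper itself supplies no code) of a bit-level predictive arithmetic encoder: a small two-layer network is trained online to estimate the probability of each successive bit, and a binary arithmetic coder turns those estimates into output. The context size, hidden width, step-size heuristic, and the textbook Witten-Neal-Cleary-style coder are all assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a bit-level predictive arithmetic
# encoder in which a small two-layer network, trained online, supplies the
# probability of each successive bit.  CONTEXT, HIDDEN, the step-size rule,
# and the Witten-Neal-Cleary coder are illustrative assumptions.
import math
import random

CONTEXT = 16  # previous bits fed to the predictor (assumed size)
HIDDEN = 8    # hidden units in the two-layer network (assumed size)

class BitPredictor:
    """Two-layer net (tanh hidden layer, sigmoid output), trained per bit."""

    def __init__(self, lr=0.1):
        rnd = random.Random(0)
        self.w1 = [[rnd.uniform(-0.1, 0.1) for _ in range(CONTEXT)]
                   for _ in range(HIDDEN)]
        self.w2 = [rnd.uniform(-0.1, 0.1) for _ in range(HIDDEN)]
        self.lr = lr

    def predict(self, ctx):
        # Forward pass; returns P(next bit = 1).
        self.h = [math.tanh(sum(w * x for w, x in zip(row, ctx)))
                  for row in self.w1]
        z = max(min(sum(w * h for w, h in zip(self.w2, self.h)), 30.0), -30.0)
        self.p = 1.0 / (1.0 + math.exp(-z))
        return self.p

    def update(self, ctx, bit):
        # Online backprop on cross-entropy loss: the output error is (bit - p).
        # The step-size rule below is only a crude stand-in for the paper's
        # improved adaptive variable learning rate.
        err = bit - self.p
        step = self.lr * (1.0 if abs(err) > 0.5 else 0.5)
        for j in range(HIDDEN):
            grad_h = err * self.w2[j] * (1.0 - self.h[j] ** 2)
            self.w2[j] += step * err * self.h[j]
            for i in range(CONTEXT):
                self.w1[j][i] += step * grad_h * ctx[i]

class ArithmeticEncoder:
    """Textbook binary arithmetic coder with underflow (pending-bit) handling."""

    TOP, HALF, QUARTER = (1 << 32) - 1, 1 << 31, 1 << 30

    def __init__(self):
        self.low, self.high = 0, self.TOP
        self.pending, self.bits = 0, []

    def _emit(self, b):
        self.bits.append(b)
        self.bits.extend([1 - b] * self.pending)  # flush deferred underflow bits
        self.pending = 0

    def encode(self, bit, p1):
        # Split [low, high] in proportion to P(bit = 1): bit 1 keeps the
        # lower subinterval, bit 0 the upper one.
        mid = self.low + int((self.high - self.low) * p1)
        mid = min(max(mid, self.low), self.high - 1)  # keep both halves non-empty
        if bit:
            self.high = mid
        else:
            self.low = mid + 1
        while True:  # renormalize
            if self.high < self.HALF:
                self._emit(0)
            elif self.low >= self.HALF:
                self._emit(1)
                self.low -= self.HALF
                self.high -= self.HALF
            elif self.low >= self.QUARTER and self.high < 3 * self.QUARTER:
                self.pending += 1  # interval straddles the midpoint: defer the bit
                self.low -= self.QUARTER
                self.high -= self.QUARTER
            else:
                break
            self.low *= 2
            self.high = self.high * 2 + 1

    def finish(self):
        self.pending += 1
        self._emit(1 if self.low >= self.QUARTER else 0)
        return self.bits

def compress_bits(bits):
    """Compress a sequence of 0/1 ints; returns the encoded bit list."""
    pred, enc, ctx = BitPredictor(), ArithmeticEncoder(), [0.0] * CONTEXT
    for bit in bits:
        p1 = min(max(pred.predict(ctx), 1e-6), 1.0 - 1e-6)
        enc.encode(bit, p1)
        pred.update(ctx, bit)         # learn from the bit just coded
        ctx = ctx[1:] + [float(bit)]  # slide the bit-history window
    return enc.finish()

if __name__ == "__main__":
    data = [1, 0] * 400  # a regular source the predictor can learn
    print(len(compress_bits(data)))  # far fewer than the 800 input bits
```

Because the network is updated only after each bit is coded, a decoder can replay the identical predictions and invert the process, which is what makes the scheme lossless; the achievable ratio then depends entirely on how well the predictor models the source, which is where the paper's Maximum Entropy formulation and adaptive learning rate come in.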

Keywords

Neural Network · Compression Ratio · Maximum Entropy · Data Compression · Lossless Compression

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Jun-Lin Zhou¹
  • Yan Fu¹

  1. School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
