Scientific Data Lossless Compression Using Fast Neural Network
Scientific computing generates enormous volumes of data from complex simulations, often several terabytes, on which general-purpose compression methods perform poorly. Neural networks have the potential to extend data compression beyond the character-level (n-gram) models currently in use, but they have usually been avoided as too slow to be practical. We present a lossless compression method that uses a fast neural network, based on maximum entropy, together with an arithmetic coder. The compressor is a bit-level predictive arithmetic encoder that uses a two-layer fast neural network to predict the probability distribution of the next bit. In the training phase, an improved adaptive variable learning rate is used for fast convergence. The proposed compressor achieves better compression than popular compressors (bzip, zzip, lzo, ucl, and deflate) on the lared-p data set, and is competitive in time and space for practical application.
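The pipeline described above, a bit-level predictive arithmetic coder driven by an online neural predictor, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the two-layer maximum-entropy network and the adaptive variable learning rate are replaced here by a single logistic unit over the last eight bits with a fixed learning rate, and the coder is a textbook 32-bit binary arithmetic coder.

```python
import math

class LogisticBitPredictor:
    """Stand-in for the paper's two-layer network: online logistic
    regression over the last `context_bits` bits (fixed learning rate)."""
    def __init__(self, context_bits=8, lr=0.1):
        self.lr = lr
        self.w = [0.0] * (context_bits + 1)     # +1 weight for the bias
        self.history = [0] * context_bits

    def _features(self):
        return [1.0] + [1.0 if b else -1.0 for b in self.history]

    def p1(self):
        """Predicted probability that the next bit is 1."""
        z = sum(w * x for w, x in zip(self.w, self._features()))
        return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

    def update(self, bit):
        """One SGD step on the log loss, then shift the bit into context."""
        g = bit - self.p1()
        for i, x in enumerate(self._features()):
            self.w[i] += self.lr * g * x
        self.history = self.history[1:] + [bit]

HALF, QUARTER = 1 << 31, 1 << 30

def _mid(low, high, p1):
    # Split [low, high]; clamp so both sub-intervals stay non-empty.
    span = high - low
    return low + max(1, min(span - 1, int(span * p1)))

class ArithmeticEncoder:
    def __init__(self):
        self.low, self.high, self.pending, self.bits = 0, (1 << 32) - 1, 0, []

    def _emit(self, b):
        self.bits.append(b)
        self.bits.extend([1 - b] * self.pending)
        self.pending = 0

    def encode(self, bit, p1):
        mid = _mid(self.low, self.high, p1)
        if bit: self.high = mid          # bit 1 -> lower sub-interval
        else:   self.low = mid + 1       # bit 0 -> upper sub-interval
        while True:                      # renormalize (E1/E2/E3 rules)
            if self.high < HALF:
                self._emit(0)
            elif self.low >= HALF:
                self._emit(1); self.low -= HALF; self.high -= HALF
            elif self.low >= QUARTER and self.high < 3 * QUARTER:
                self.pending += 1; self.low -= QUARTER; self.high -= QUARTER
            else:
                break
            self.low <<= 1; self.high = (self.high << 1) | 1

    def finish(self):
        self.pending += 1
        self._emit(0 if self.low < QUARTER else 1)
        return self.bits

class ArithmeticDecoder:
    def __init__(self, bits):
        self.src, self.pos = list(bits), 0
        self.low, self.high, self.value = 0, (1 << 32) - 1, 0
        for _ in range(32):
            self.value = (self.value << 1) | self._next()

    def _next(self):
        b = self.src[self.pos] if self.pos < len(self.src) else 0
        self.pos += 1
        return b

    def decode(self, p1):
        mid = _mid(self.low, self.high, p1)
        if self.value <= mid: bit, self.high = 1, mid
        else:                 bit, self.low = 0, mid + 1
        while True:                      # mirror the encoder's renormalization
            if self.high < HALF:
                pass
            elif self.low >= HALF:
                self.low -= HALF; self.high -= HALF; self.value -= HALF
            elif self.low >= QUARTER and self.high < 3 * QUARTER:
                self.low -= QUARTER; self.high -= QUARTER; self.value -= QUARTER
            else:
                break
            self.low <<= 1; self.high = (self.high << 1) | 1
            self.value = (self.value << 1) | self._next()
        return bit

def compress_bits(bits):
    pred, enc = LogisticBitPredictor(), ArithmeticEncoder()
    for b in bits:
        enc.encode(b, pred.p1())
        pred.update(b)
    return enc.finish()

def decompress_bits(code, n_bits):
    pred, dec, out = LogisticBitPredictor(), ArithmeticDecoder(code), []
    for _ in range(n_bits):
        b = dec.decode(pred.p1())
        pred.update(b)                   # same updates as the encoder side
        out.append(b)
    return out
```

Because the decoder updates an identical predictor with each decoded bit, both sides see the same probability sequence, which is what makes predictive arithmetic coding lossless; on highly regular bit streams the output is much shorter than the input.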
Keywords: Neural Network, Compression Ratio, Maximum Entropy, Data Compression, Lossless Compression