Analysis of Denoising Autoencoder Properties Through Misspelling Correction Task

  • Karol Draszawka
  • Julian Szymański
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10449)

Abstract

The paper analyzes selected properties of denoising autoencoders, using misspelling correction as an exemplary task. We evaluate the capacity of the network in its classical feed-forward form. We also propose a modification to the output layer of the net, which we call multi-softmax. Experiments show that a model trained with this output layer outperforms the traditional network in both learning time and accuracy. We also test how the amount of noise introduced into the training data influences learning speed and generalization quality. The proposed approach of evaluating various properties of autoencoders on a misspelling correction task serves as an open framework for further experiments, e.g., incorporating other neural network topologies into the autoencoder setting.
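The abstract does not spell out the architecture, but the multi-softmax idea can be illustrated concretely: instead of one flat output layer over the whole one-hot-encoded word, the output is split into per-character groups, each normalized by its own softmax. Below is a minimal, hypothetical sketch in Keras; the word length, alphabet size, hidden width, and the `corrupt` helper are illustrative assumptions, not the authors' code:

```python
# Minimal sketch (assumptions, not the authors' code): a denoising
# autoencoder for misspelling correction with a "multi-softmax" output,
# i.e. one softmax per character position rather than a single flat output.
import numpy as np
from tensorflow.keras import layers, models

MAX_LEN = 12    # assumed maximum word length (shorter words are padded)
ALPHABET = 27   # assumed alphabet: 26 letters plus a padding symbol
HIDDEN = 256    # assumed width of the hidden (encoding) layer

# Encoder-decoder over a flattened one-hot encoding of the word.
inputs = layers.Input(shape=(MAX_LEN * ALPHABET,))
hidden = layers.Dense(HIDDEN, activation="relu")(inputs)
logits = layers.Dense(MAX_LEN * ALPHABET)(hidden)

# Multi-softmax: reshape the logits to (position, alphabet) and normalize
# each character position independently.
per_char = layers.Reshape((MAX_LEN, ALPHABET))(logits)
outputs = layers.Softmax(axis=-1)(per_char)

model = models.Model(inputs, outputs)
model.compile(optimizer="adadelta", loss="categorical_crossentropy")

def corrupt(one_hot_words, noise_rate=0.1, seed=0):
    """Denoising criterion: randomly replace a fraction of characters;
    the clean word stays the reconstruction target."""
    rng = np.random.default_rng(seed)
    noisy = one_hot_words.copy()
    num_words, max_len, _ = noisy.shape
    for i in range(num_words):
        for pos in range(max_len):
            if rng.random() < noise_rate:
                noisy[i, pos] = np.eye(ALPHABET)[rng.integers(ALPHABET)]
    return noisy

# Usage (X_clean: float array of shape (num_words, MAX_LEN, ALPHABET)):
# X_noisy = corrupt(X_clean).reshape(len(X_clean), -1)
# model.fit(X_noisy, X_clean, epochs=10, batch_size=128)
```

Because each character position receives its own normalized distribution, the reconstruction loss decomposes into per-position categorical cross-entropy terms; that is one plausible reading of why the multi-softmax layer could train faster than an unstructured output layer.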

Keywords

Autoencoder · Misspellings · Autoassociative memory

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Systems Architecture, Faculty of Electronics, Telecommunications and Informatics, Gdańsk University of Technology, Gdańsk, Poland
