
Why Is Auto-Encoding Difficult for Genetic Programming?

James McDermott
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11451)

Abstract

Unsupervised learning is an important component in many recent successes in machine learning. The autoencoder neural network is one of the most prominent approaches to unsupervised learning. Here, we use the genetic programming paradigm to create autoencoders and find that the task is difficult for genetic programming, even on small datasets which are easy for neural networks. We investigate which aspects of the autoencoding task are difficult for genetic programming.
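To make the task concrete: an autoencoder is a pair of mappings, an encoder that compresses each input to a lower-dimensional code and a decoder that reconstructs the input from that code, trained (or, here, evolved) to minimise reconstruction error. The Python sketch below is illustrative only and is not the paper's implementation; `encode` and `decode` stand in for the two halves of an evolved program, and all names are hypothetical.

```python
import numpy as np

def reconstruction_error(encode, decode, X):
    """Mean squared reconstruction error over a dataset X (rows = examples).

    `encode` and `decode` are hypothetical stand-ins for the two halves
    of an autoencoder, e.g. two evolved GP expressions.
    """
    X_hat = np.array([decode(encode(x)) for x in X])
    return float(np.mean((X - X_hat) ** 2))

# Toy usage: a 2-to-1 bottleneck on random 2-D data. A GP system would
# search for encode/decode programs that minimise this error; a neural
# autoencoder would minimise it by gradient descent.
X = np.random.rand(100, 2)
encode = lambda x: np.array([x.mean()])     # compress 2 features to 1 code
decode = lambda z: np.array([z[0], z[0]])   # reconstruct 2 features from the code
print(reconstruction_error(encode, decode, X))
```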

Keywords

Unsupervised learning · Autoencoder · Linear genetic programming · Symbolic regression

Acknowledgements

This work was carried out while JMcD was at University College Dublin. Thanks to members of the University College Dublin Natural Computing Research and Applications group, in particular Takfarinas Saber and Stefano Mauceri, for useful discussions. Thanks to Van Loi Cao for data-processing code and for discussion. Thanks also to the anonymous reviewers.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

National University of Ireland, Galway, Ireland
