Evolutionary Algorithms for Convolutional Neural Network Visualisation

  • Conference paper
  • First Online:
High Performance Computing (CARLA 2018)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 979))

Abstract

Deep Learning is based on deep neural networks trained over huge sets of examples. It has enabled computers to compete with—or even outperform—humans at many tasks, from playing Go to driving vehicles.

Still, it remains hard to understand how these networks actually operate. While an observer can see any individual local behaviour, this gives little insight into their global decision-making process.

However, there is a class of neural networks widely used for image processing, convolutional networks, in which each layer contains features working in parallel. By their structure, these features keep some spatial information across a network’s layers. Visualisation of this spatial information at different locations in a network, notably on input data that maximise the activation of a given feature, can give insights into the way the model works.

This paper investigates the use of Evolutionary Algorithms to evolve such input images that maximise feature activation. Compared with some pre-existing approaches, ours is currently more computationally expensive but has wider applicability.
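As a rough illustration of the idea, the following minimal sketch evolves small grey-scale images towards higher values of a fitness function. Here `feature_activation` is a self-contained stand-in (an assumption of this sketch, not the paper's actual fitness); in the paper's setting it would instead be the activation of a chosen feature in a convolutional network evaluated on the candidate image.

```python
import numpy as np

def feature_activation(image):
    # Stand-in fitness for this sketch: favours bright, uniform images.
    # In the paper's setting this would be the activation of a chosen
    # CNN feature map for this input image.
    return float(image.mean() - image.var())

def evolve(pop_size=20, side=8, generations=50, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Population of random grey-scale images with pixel values in [0, 1].
    population = rng.random((pop_size, side, side))
    for _ in range(generations):
        scores = np.array([feature_activation(ind) for ind in population])
        # Truncation selection: keep the better half as parents.
        parents = population[np.argsort(scores)[pop_size // 2:]]
        # Children are mutated copies of the parents (Gaussian noise),
        # clipped back into the valid pixel range.
        children = np.clip(
            parents + rng.normal(0.0, sigma, parents.shape), 0.0, 1.0)
        population = np.concatenate([parents, children])
    scores = np.array([feature_activation(ind) for ind in population])
    return population[np.argmax(scores)], float(scores.max())

best_image, best_score = evolve()
```

Because the parents survive into the next generation, the scheme is elitist: the best fitness found can never decrease from one generation to the next.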


Notes

  1. Not to be confused with a news channel.

  2. The same reasoning could be applied to rectangular images, but this would add needless complexity to the argument. Moreover, actual NNs often use square images.

  3. For comparison, the number of particles in the visible universe (including photons, but excluding possible dark matter particles) is today considered to be less than \(10^{90}\).

  4. We write “proportional to” and not “\(8\times \)” as the network actually works on floating-point numbers.

  5. The question of the selection of the parents itself admits multiple possibilities: random draws weighted by the fitness score, tournaments, etc.

  6. This may be done by giving each individual genome a score; however, there may be cases where that is not possible. A way to rank the individuals (i.e. an order on them) would be sufficient, though.

  7. Incidentally, it would be possible to train an NN with an EA; however, this is extremely inefficient compared with the now-standard backpropagation algorithm used for this purpose.

  8. Moltke’s thought was actually subtler. See for instance [15].

  9. Technically, of floats.

  10. The actual feature activation may need to be adapted depending on where exactly it is taken in a Caffe model, as Caffe separates the convolution from the activation stricto sensu, and on the kind of activation the network uses. Here we consider that \(AM_{foi}\) is the output of a ReLU activation layer.

  11. A better way may be to normalise the \(\mathcal {A}\), for instance by dividing them by \(N_{foi}^2\).

  12. For instance, each GPU of an Nvidia Tesla K80 has 12 GiB of RAM. This allows processing a batch of about 160 images/individuals in parallel over VGG (without its fully connected layers). The GPU of a workstation’s Nvidia Quadro K1200 (concurrently used by desktop applications) allows batches of only about 50.

  13. We also ran multiple experiments on the same feature to see whether we obtained different results.

  14. Actually, stricto sensu, the couple {(P)RNG, EA rules} introduces a bias, as it determines the trajectory of the evolution (considered as a dynamical system). However, as long as the PRNG has good statistical properties and the EA rules are not too constrained, this should not matter. This is very different from the bias in a specific direction introduced by, for instance, a prior on the result.
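Footnote 5 lists several ways of selecting parents. Tournament selection, one of the options mentioned there, can be sketched as follows (the function and variable names are illustrative, not taken from the paper's implementation); note that, as footnote 6 observes, it only needs a way to compare individuals, not an absolute score:

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    # Draw k distinct individuals at random and keep the fittest one.
    contestants = rng.sample(range(len(population)), k)
    winner = max(contestants, key=lambda i: fitness[i])
    return population[winner]

# Toy example: individuals are plain numbers and fitness is the value itself.
pop = [0.1, 0.9, 0.4, 0.7]
fit = pop
parent = tournament_select(pop, fit, k=2)  # one of pop, biased towards fitter ones
```

Larger tournament sizes `k` increase the selection pressure: with `k` equal to the population size the fittest individual always wins, while `k = 1` degenerates to uniform random selection.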

References

  1. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. CoRR abs/1311.2901 (2013)

  2. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014)

  3. Dyson, F.: A meeting with Enrico Fermi. Nature 427, 297 (2004). https://doi.org/10.1038/427297a

  4. Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: visualising image classification models and saliency maps. CoRR abs/1312.6034 (2013)

  5. Mahendran, A., Vedaldi, A.: Understanding deep image representations by inverting them. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.html

  6. Mordvintsev, A., Olah, C., Tyka, M.: Inceptionism: going deeper into neural networks. Google AI Blog, June 2015. https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

  7. Mordvintsev, A., Tyka, M., Olah, C.: DeepDream. GitHub code repository. https://github.com/google/deepdream

  8. Yosinski, J., Clune, J., Nguyen, A.M., Fuchs, T.J., Lipson, H.: Understanding neural networks through deep visualization. CoRR abs/1506.06579 (2015). http://yosinski.com/deepvis

  9. Koza, J.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge (1992)

  10. Collet, P., Lutton, E., Schoenauer, M., Louchet, J.: Take it EASEA. In: Schoenauer, M., et al. (eds.) PPSN 2000. LNCS, vol. 1917, pp. 891–901. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-45356-3_87

  11. Maitre, O., Kruger, F., Pallamidessi, J., et al.: EASEA. GitHub code repository (2008–2016). https://github.com/EASEA/easea

  12. Jia, Y., et al.: Caffe: convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)

  13. Jia, Y., et al.: Caffe: a fast open framework for deep learning. GitHub code repository (2014–2018). https://github.com/BVLC/caffe/

  14. Misc.: Model zoo. GitHub. https://github.com/BVLC/caffe/wiki/Model-Zoo

  15. Hughes, D. (ed.): Moltke on the Art of War: Selected Writings. New edn. Presidio Press (1995). ISBN: 978-0891415756

  16. Chollet, F., et al.: Keras. GitHub code repository (2015–2018). https://github.com/fchollet/keras

  17. Varrette, S., Bouvry, P., Cartiaux, H., Georgatos, F.: Management of an academic HPC cluster: the UL experience. In: Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), Bologna, Italy, pp. 959–967. IEEE, July 2014. https://hpc.uni.lu

  18. Simonyan, K., Zisserman, A.: 19-layer model from the arXiv paper “Very deep convolutional networks for large-scale image recognition”. Caffe Zoo/GitHub gist (2014). https://gist.github.com/ksimonyan/3785162f95cd2d5fee77

  19. Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. CoRR abs/1710.08864 (2017)

  20. Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 427–436 (2015). https://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_047.pdf

Author information

Correspondence to Nicolas Bernard or Franck Leprévost.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Bernard, N., Leprévost, F. (2019). Evolutionary Algorithms for Convolutional Neural Network Visualisation. In: Meneses, E., Castro, H., Barrios Hernández, C., Ramos-Pollan, R. (eds) High Performance Computing. CARLA 2018. Communications in Computer and Information Science, vol 979. Springer, Cham. https://doi.org/10.1007/978-3-030-16205-4_2

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-16205-4_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-16204-7

  • Online ISBN: 978-3-030-16205-4

  • eBook Packages: Computer Science, Computer Science (R0)
