
Adaptation of General Concepts of Software Testing to Neural Networks

Programming and Computer Software

Abstract

The problem of testing and debugging learning neural network systems is discussed. Differences between these systems and conventional program implementations of algorithms, from the point of view of testing, are noted. Requirements for testing systems are identified. Specific features of various neural network models that affect the selection of the testing technique and the determination of tested parameters are analyzed. Ways to eliminate the noted drawbacks of the systems under study are discussed. The discussion is illustrated by an example.
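
As a purely illustrative sketch, not taken from the article, the core difference can be shown as two test oracles in Python; TrainedNet, exact_oracle, and statistical_oracle are hypothetical names introduced here:

    import random

    class TrainedNet:
        # Hypothetical stand-in for a trained binary classifier.
        def predict(self, x):
            # Toy decision rule: classify by the sign of the input sum.
            return 1 if sum(x) >= 0 else -1

    def exact_oracle(net):
        # Classical software testing: demand an exact output for a fixed
        # input. A learned model gives no such guarantee in general.
        return net.predict([1.0, 2.0]) == 1

    def statistical_oracle(net, cases, min_accuracy=0.95):
        # Testing adapted to neural networks: require that the share of
        # correctly handled cases exceeds a threshold.
        correct = sum(1 for x, label in cases if net.predict(x) == label)
        return correct / len(cases) >= min_accuracy

    random.seed(0)
    net = TrainedNet()
    cases = []
    for _ in range(1000):
        x = [random.uniform(-1.0, 1.0) for _ in range(4)]
        cases.append((x, 1 if sum(x) >= 0 else -1))
    print(exact_oracle(net), statistical_oracle(net, cases))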

Notes

  1. A function \(a(x)\) is called a sigmoid function if it is continuously differentiable, monotonically increasing, and bounded from below and above (a standard example is given after these notes).

  2. In binary networks, neuron values are sometimes defined to be (1, 0). In this case, the activation function is the Heaviside function \(\mathcal{Q}(z) = \begin{cases} 1, & z \geqslant 0, \\ 0, & z < 0 \end{cases}\). Transition from these values to the standard ones (1, –1) is trivial: \(y = 2x - 1\) (see the code sketch after these notes).
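
For note 1, the standard logistic function is a canonical example: it is continuously differentiable, strictly increasing, and bounded between 0 and 1:

\[
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr) > 0, \qquad 0 < \sigma(x) < 1.
\]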
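
For note 2, a minimal Python sketch of the two value conventions and the conversion between them (heaviside and to_bipolar are illustrative names, not from the article):

    import numpy as np

    def heaviside(z):
        # Q(z) from note 2: 1 where z >= 0, else 0.
        return np.where(z >= 0, 1, 0)

    def to_bipolar(x):
        # Map {1, 0} neuron values to the standard {1, -1}: y = 2x - 1.
        return 2 * x - 1

    z = np.array([-0.5, 0.0, 2.0])
    print(heaviside(z))              # [0 1 1]
    print(to_bipolar(heaviside(z)))  # [-1  1  1]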

ACKNOWLEDGMENTS

This work was supported by the Russian Foundation for Basic Research, project no. 18-07-00697а.

Author information

Correspondence to Yu. L. Karpov, L. E. Karpov or Yu. G. Smetanin.

Additional information

Translated by A. Pesterev


Cite this article

Karpov, Y.L., Karpov, L.E. & Smetanin, Y.G. Adaptation of General Concepts of Software Testing to Neural Networks. Program Comput Soft 44, 324–334 (2018). https://doi.org/10.1134/S0361768818050031
