
From Continuous Behaviour to Discrete Knowledge

  • Conference paper
Artificial Neural Nets Problem Solving Methods (IWANN 2003)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 2687))



Neural networks have proven to be powerful techniques for solving a wide range of tasks. However, the concepts they learn are unreadable to humans. Some works attempt to extract symbolic models from trained networks, so that the learned model can be understood through decision trees or rules, which are closer to human understanding. The main problem with this approach is that neural networks output a continuous range of values; even if a symbolic technique could handle continuous classes, its output would still be hard for humans to interpret. In this work, we present a system that models a neural network's behaviour by discretizing its outputs with a vector quantization approach, making it possible to apply the symbolic method.
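The pipeline the abstract describes can be sketched in a few lines: sample the network's continuous outputs, quantize them with Lloyd's algorithm (the classic vector-quantization scheme), and use the resulting discrete codes as class labels for a symbolic learner. The sketch below is a minimal illustration under assumed details — the `tanh` function stands in for a trained network, and the codebook size `k=4` is arbitrary, not taken from the paper.

```python
import numpy as np

def lloyd_quantize(values, k, iters=50, seed=0):
    """1-D Lloyd (k-means) quantization: return a codebook and a
    discrete label (nearest codebook index) for each value."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with k distinct sampled values.
    codebook = rng.choice(values, size=k, replace=False).astype(float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # Assignment step: nearest codebook entry per value.
        labels = np.argmin(np.abs(values[:, None] - codebook[None, :]), axis=1)
        # Update step: move each entry to the centroid of its cell.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = values[labels == j].mean()
    return codebook, labels

# Stand-in for a trained network's continuous behaviour on sampled inputs
# (tanh is purely illustrative, not the paper's network).
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=200)
y = np.tanh(3.0 * x)

codebook, labels = lloyd_quantize(y, k=4)
# `labels` now gives one discrete class per example, suitable as the
# target attribute for a symbolic learner such as a decision-tree inducer.
```

With the outputs discretized this way, the (input, label) pairs form an ordinary classification dataset, which is what makes rule- and tree-induction methods applicable at all.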




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ledezma, A., Fernández, F., Aler, R. (2003). From Continuous Behaviour to Discrete Knowledge. In: Mira, J., Álvarez, J.R. (eds) Artificial Neural Nets Problem Solving Methods. IWANN 2003. Lecture Notes in Computer Science, vol 2687. Springer, Berlin, Heidelberg.


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40211-4

  • Online ISBN: 978-3-540-44869-3

  • eBook Packages: Springer Book Archive
