Evolutionary Intelligence

Volume 5, Issue 3, pp 171–187

Kernel representations for evolving continuous functions

  • Tobias Glasmachers
  • Jan Koutník
  • Jürgen Schmidhuber
Special Issue


To parameterize continuous functions for evolutionary learning, we use kernel expansions in nested sequences of function spaces of growing complexity. This approach is particularly powerful when dealing with non-convex constraints and discontinuous objective functions. Kernel methods offer a number of beneficial properties for parameterizing continuous functions, such as smoothness and locality, which make them attractive as a basis for mutation operators. Beyond such practical considerations, kernel methods make heavy use of inner products in function space and offer a well established regularization framework. We show how evolutionary computation can profit from these properties. Searching function spaces of iteratively increasing complexity allows the solution to evolve from a simple first guess to a complex and highly refined function. At transition points where the evolution strategy is confronted with the next level of functional complexity, the kernel framework can be used to project the search distribution into the extended search space. The feasibility of the method is demonstrated on challenging trajectory planning problems where redundant robots have to avoid obstacles.
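The core idea above, parameterizing a function as a kernel expansion whose coefficients an evolution strategy mutates, and embedding it losslessly into the next, finer level of the nested hierarchy at a transition point, can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract: a Gaussian kernel, equidistant centers on [0, 1], and all function names are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(t, c, bandwidth=0.2):
    """Gaussian (RBF) kernel on the scalar trajectory parameter t."""
    return np.exp(-((t - c) ** 2) / (2.0 * bandwidth ** 2))

def evaluate(alphas, centers, t):
    """Kernel expansion f(t) = sum_i alpha_i * k(t, c_i)."""
    return sum(a * gaussian_kernel(t, c) for a, c in zip(alphas, centers))

def mutate(alphas, sigma=0.1, rng=None):
    """ES-style Gaussian mutation of the expansion coefficients."""
    rng = rng if rng is not None else np.random.default_rng()
    return alphas + sigma * rng.standard_normal(len(alphas))

def refine(alphas, centers):
    """Transition to the next nesting level: insert a new center
    between each pair of old ones and give the new kernels zero
    coefficients, so the represented function is unchanged."""
    new_centers = np.linspace(0.0, 1.0, 2 * len(centers) - 1)
    new_alphas = np.zeros_like(new_centers)
    new_alphas[::2] = alphas  # old centers sit at the even slots
    return new_alphas, new_centers
```

Because the coarse centers form a subset of the fine ones, `refine` embeds the old function exactly into the larger space; search then continues over the enlarged coefficient vector, letting a simple first guess grow into a highly refined function.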


Keywords: Trajectory planning · Collision avoidance · Robotics · Kernels · Nested function spaces · Evolution strategy



This work was funded through the 7th Framework Programme of the EU under grant number 231576 (STIFF project) and SNF grant 200020-125038/1.



Copyright information

© Springer-Verlag 2012

Authors and Affiliations

  • Tobias Glasmachers (1)
  • Jan Koutník (2)
  • Jürgen Schmidhuber (2)
  1. Institute for Neural Computation, Ruhr-University Bochum, Universitätsstr. 150, Bochum, Germany
  2. IDSIA, University of Lugano and SUPSI, Manno-Lugano, Switzerland
