Abstract
The prediction of time series is an important task in finance, economics, object tracking, state estimation and robotics. Prediction is generally either based on a well-known mathematical description of the system underlying the time series or learned from previously collected time series. In this work we introduce a novel approach to learning predictions of real-world time series such as object trajectories in robotics. In a sequence of experiments we evaluate whether a liquid state machine in combination with a supervised learning algorithm can be used to predict ball trajectories from input data coming from a video camera mounted on a robot participating in RoboCup. The pre-processed video data is fed into a recurrent spiking neural network. Connections to a set of output neurons are trained by linear regression to predict the position of the ball several time steps ahead. The main advantages of this approach are that, due to the nonlinear projection of the input data into a high-dimensional space, simple learning algorithms suffice; that the liquid state machine provides temporal memory; and that this kind of computation appears more biologically plausible than conventional methods for prediction. Our results support the idea that learning with a liquid state machine is a generic and powerful tool for prediction.
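The pipeline described in the abstract — a fixed recurrent network projecting the input into a high-dimensional state space, plus a linear readout trained by regression to predict the input several steps ahead — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the spiking liquid is replaced by a rate-based echo-state-style network, and the camera-derived ball trajectory by a hypothetical noisy sine wave; all sizes and constants (`N_RES`, `HORIZON`, the regularizer) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random recurrent "liquid" (a rate-based stand-in for the
# spiking microcircuit; only the readout below is trained).
N_IN, N_RES, HORIZON = 1, 100, 5          # predict 5 steps ahead
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # contractive scaling

def liquid_states(u):
    """Drive the fixed recurrent network with input sequence u (T x N_IN)."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Hypothetical 1-D trajectory standing in for the ball position over time.
T = 1000
u = np.sin(0.1 * np.arange(T))[:, None] + 0.05 * rng.normal(size=(T, 1))

X = liquid_states(u)                       # high-dimensional liquid states
# Train the linear readout by regularized least squares: the state at
# time t must predict the input HORIZON steps later.
X_tr, y_tr = X[:-HORIZON], u[HORIZON:, 0]
W_out = np.linalg.solve(X_tr.T @ X_tr + 1e-6 * np.eye(N_RES), X_tr.T @ y_tr)

y_hat = X_tr @ W_out
rmse = np.sqrt(np.mean((y_hat - y_tr) ** 2))
```

Because all the nonlinearity lives in the fixed recurrent projection, the only trained parameters are the readout weights `W_out`, which is why plain linear regression suffices.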
Additional information
Harald Burgsteiner graduated from Salzburg Technical High School in the field of Electronics and Information Technology and went on to receive his M.Sc. and Ph.D. from Graz University of Technology, passing his exams with distinction and receiving his degrees with honors. He worked as a research and teaching assistant at Prof. Maass' Institute for Theoretical Computer Science at Graz University of Technology, where his main research focus was on new learning algorithms for neural networks on robots in real-world environments. He left the group in spring 2003. Harald Burgsteiner currently works at the Graz University of Applied Sciences as a Professor for Medical Informatics.
Mark Kröll is a Master's student at the Institute for Theoretical Computer Science, Graz University of Technology. He currently works at the Division of Knowledge Discovery, Know-Center Graz. His scientific interests lie in the fields of Machine Learning and Kernel Methods.
Alexander Leopold received his B.Sc. degree in Telematics from Graz University of Technology in 2005 and is currently writing his master thesis at the Signal Processing and Speech Communication Laboratory. His research interests are computational intelligence and stochastic signal processing.
Gerald Steinbauer received an M.Sc. in Computer Engineering (Telematik) from Graz University of Technology in 2001. He is currently a researcher at the Institute for Software Technology at Graz University of Technology and is working on his Ph.D. thesis, which focuses on intelligent robust control of autonomous mobile robots. His research interests include autonomous mobile robots, sensor fusion, world modeling, robust robot control and RoboCup. He built up the RoboCup Middle-Size League team of Graz University of Technology and currently works as its project leader. He is a member of the IEEE Robotics and Automation Society, the IEEE Computer Society and the Austrian Society for Artificial Intelligence. Moreover, he is a co-founder and member of the Austrian RoboCup National Chapter.
Burgsteiner, H., Kröll, M., Leopold, A. et al. Movement prediction from real-world images using a liquid state machine. Appl Intell 26, 99–109 (2007). https://doi.org/10.1007/s10489-006-0007-1