Abstract
In this chapter, we briefly survey previous work on predicting noise-free, piecewise chaotic time series and noisy time series corrupted by high-frequency random noise. For noise-free time series, we propose a constrained formulation for neural-network learning that incorporates the error of each learning pattern as a constraint, a new cross-validation scheme that allows multiple validation sets to be considered during learning, a recurrent FIR neural-network architecture that combines a recurrent structure with a memory-based FIR structure, and a violation-guided back-propagation algorithm for searching the constrained space of the formulation. For noisy time series, we systematically study the edge effect caused by low-pass filtering and develop an approach that incorporates constraints on predicting the low-pass data in the lag period. These constraints enable active training in the lag period, which greatly improves prediction accuracy there. Finally, experimental results show significant improvements in prediction accuracy on standard benchmarks and on stock-price time series.
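To illustrate the idea of treating each training pattern's error as a constraint, the following is a minimal sketch of one way such a constrained formulation could be solved with Lagrange-multiplier-style penalties, on a toy linear model rather than a neural network. The error threshold `tau`, the step sizes, and the multiplier update rule are illustrative assumptions; this is not the chapter's violation-guided back-propagation algorithm.

```python
import numpy as np

# Minimal sketch (illustrative, not the chapter's algorithm): train a linear
# model subject to one constraint per pattern, e_i(w) <= tau, using
# Lagrange-multiplier-style penalties that grow where constraints are violated.

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))            # 20 training patterns, 3 inputs
y = X @ np.array([1.0, -2.0, 0.5])      # noise-free targets

w = np.zeros(3)                          # model weights
lam = np.full(len(X), 1e-3)              # one multiplier per pattern
tau = 1e-3                               # allowed squared error per pattern

for _ in range(2000):
    r = X @ w - y                        # per-pattern residuals
    viol = np.maximum(r**2 - tau, 0.0)   # per-pattern constraint violations
    w -= 0.005 * 2 * (lam * r) @ X       # descend on sum_i lam_i * e_i(w)
    lam = np.clip(lam + 0.1 * viol, 1e-3, 1.0)  # raise pressure on violated patterns

print(np.max((X @ w - y) ** 2))          # max per-pattern error, below tau
```

The key difference from ordinary mean-squared-error training is that the multipliers concentrate learning effort on the individual patterns that still violate their error bound, rather than averaging the error over all patterns.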
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this chapter
Wah, B.W., Qian, M. (2004). Constraint-Based Neural Network Learning for Time Series Predictions. In: Intelligent Technologies for Information Analysis. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-07952-2_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-07378-6
Online ISBN: 978-3-662-07952-2