Applying Prototype Selection and Abstraction Algorithms for Efficient Time-Series Classification

  • Stefanos Ougiaroglou
  • Leonidas Karamitopoulos
  • Christos Tatoglou
  • Georgios Evangelidis
  • Dimitris A. Dervos
Part of the Springer Series in Bio-/Neuroinformatics book series (SSBN, volume 4)


A widely used time-series classification method is the single nearest neighbour (1-NN) rule. It has been adopted in many time-series classification systems because of its simplicity and effectiveness. However, the efficiency of the classification process depends on the size of the training set as well as on the dimensionality of the data. Although many speed-up methods for fast time-series classification have been proposed in the literature, state-of-the-art, non-parametric prototype selection and abstraction data reduction techniques have not been exploited on time-series data. In this work, we present an experimental study in which well-known prototype selection and abstraction algorithms are evaluated on both the original data and a dimensionally reduced representation of the same data, drawn from seven popular time-series datasets. The experimental results demonstrate that prototype selection and abstraction algorithms, even when applied to dimensionally reduced data, can effectively reduce both the computational cost of classification and the storage requirements for the training data, and, in some cases, improve classification accuracy.
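The pipeline the abstract describes can be illustrated with a minimal sketch: reduce dimensionality with Piecewise Aggregate Approximation (PAA), condense the training set with a prototype selection algorithm, then classify with 1-NN on the condensed prototypes. This is not the authors' code; it assumes Euclidean distance, uses Hart's Condensed Nearest Neighbour (CNN) as a representative selection algorithm, and the dataset and function names are illustrative.

```python
import numpy as np

def paa(series, segments):
    """Piecewise Aggregate Approximation: mean of equal-length segments.
    Assumes len(series) is divisible by `segments` for simplicity."""
    return series.reshape(segments, -1).mean(axis=1)

def nn_1(train_X, train_y, x):
    """Classify x by its single nearest neighbour under Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(d)]

def cnn_condense(X, y):
    """Hart's CNN: keep any item the current condensed set misclassifies;
    repeat passes until a full pass adds nothing."""
    keep = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            if nn_1(X[keep], y[np.array(keep)], X[i]) != y[i]:
                keep.append(i)
                changed = True
    return np.array(keep)

# Illustrative data: two well-separated classes of length-8 "time series".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 8)), rng.normal(1, 0.1, (10, 8))])
y = np.array([0] * 10 + [1] * 10)

# Dimensionality reduction (8 -> 4), then condensation, then 1-NN.
Xr = np.array([paa(s, 4) for s in X])
keep = cnn_condense(Xr, y)
acc = np.mean([nn_1(Xr[keep], y[keep], x) == t for x, t in zip(Xr, y)])
```

By construction, CNN's condensed set classifies every training item correctly, so the accuracy over the training set is 1.0 while only a fraction of the items are stored; this is the storage/computation saving the study measures.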


Keywords: Reduction Rate · Concept Drift · Neighbor Rule · Training Item · Prototype Selection





Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Stefanos Ougiaroglou (1)
  • Leonidas Karamitopoulos (2)
  • Christos Tatoglou (2)
  • Georgios Evangelidis (1)
  • Dimitris A. Dervos (2)
  1. Department of Applied Informatics, School of Information Sciences, University of Macedonia, Thessaloniki, Greece
  2. Information Technology Department, Alexander TEI of Thessaloniki, Sindos, Greece
