
Adaptive Long-Term Ensemble Learning from Multiple High-Dimensional Time-Series

  • Samaneh Khoshrou
  • Mykola Pechenizkiy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11828)

Abstract

Learning from multiple time-series over an unbounded time-frame has received relatively little attention, despite the key applications that generate such data (e.g., video analysis and home-assisted living). Inspired by never-ending learning approaches, this paper presents an algorithm that continuously learns from multiple high-dimensional, unregulated time-series within an ensemble-based framework that evolves over time according to the drift level, so that it always reflects the latest concepts. We focus on video surveillance, one of the main sources of high-dimensional data in daily life, and conduct extensive experiments on multiple datasets that demonstrate the advantages of the proposed framework over several baseline approaches in terms of both accuracy and complexity.
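To make the abstract's idea of a drift-driven, evolving ensemble concrete, the following is a minimal illustrative sketch, not the paper's algorithm: an ensemble of incremental learners over a stream that spawns a new member when a simple windowed error-rate proxy for drift exceeds a threshold and prunes the oldest member. All names and parameters (AdaptiveEnsemble, drift_threshold, max_members) are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of an adaptive, drift-aware ensemble over a data stream.
# Assumes integer class labels 0..K-1; uses scikit-learn's GaussianNB as a
# stand-in incremental learner. Not the method proposed in the paper.
import numpy as np
from collections import deque
from sklearn.naive_bayes import GaussianNB


class AdaptiveEnsemble:
    def __init__(self, classes, max_members=5, window=200, drift_threshold=0.3):
        self.classes = np.asarray(classes)
        self.max_members = max_members
        self.drift_threshold = drift_threshold
        self.errors = deque(maxlen=window)   # recent prediction errors (drift proxy)
        self.members = [GaussianNB()]        # start with a single learner
        self._fitted = [False]

    def predict(self, X):
        votes = [m.predict(X) for m, ok in zip(self.members, self._fitted) if ok]
        if not votes:
            return np.full(len(X), self.classes[0])
        votes = np.stack(votes)              # majority vote across ensemble members
        return np.apply_along_axis(
            lambda col: np.bincount(col, minlength=len(self.classes)).argmax(),
            0, votes)

    def partial_fit(self, X, y):
        # Track a windowed error rate as a crude proxy for concept drift.
        if any(self._fitted):
            self.errors.extend(self.predict(X) != y)
        drifting = (len(self.errors) == self.errors.maxlen
                    and np.mean(self.errors) > self.drift_threshold)
        if drifting:
            # Spawn a fresh member to capture the latest concept.
            self.members.append(GaussianNB())
            self._fitted.append(False)
            if len(self.members) > self.max_members:
                self.members.pop(0)          # prune the oldest member
                self._fitted.pop(0)
            self.errors.clear()
        # Incrementally update the newest member with the current batch.
        self.members[-1].partial_fit(X, y, classes=self.classes)
        self._fitted[-1] = True
```

The design choice shown here (add a member on drift, prune by age) is one common pattern for drift-adaptive ensembles; the paper's framework instead adapts the ensemble with respect to the measured drift level, which this sketch only approximates with a fixed error-rate threshold.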

Keywords

Long-term learning · Data streams · Ensembles

Notes

Acknowledgement

The authors would like to thank the Dutch Research Council (NWO) for supporting this project.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Eindhoven University of Technology, Eindhoven, The Netherlands
