
Real-Time Power Performance Prediction in Tour de France

  • Yasuyuki Kataoka
  • Peter Gray
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11330)

Abstract

This paper introduces a real-time machine learning system that predicts the power output of professional riders at the Tour de France. In cycling races, power output matters not only to the athletes, who must manage their effort, but also to fans, who can follow each rider's pacing strategy. However, power data are rarely disclosed by teams because of their competitive sensitivity. This paper presents a machine learning module that predicts power from GPS data, with a focus on feature design and latency. First, the proposed feature design combines hand-crafted features engineered from the physics of cycling with features generated automatically by an autoencoder. Second, several machine learning models are compared and analyzed under the system's latency constraints. The proposed method reduced prediction error by 56.79% compared to the conventional physics model while satisfying the latency requirement. The module was deployed during the Tour de France 2017 to compute an effort index that was shared with fans via the media.
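The "conventional physics model" used as the baseline in the abstract is, in the cycling literature, typically a force-balance model that relates GPS-derivable quantities (speed, road gradient, acceleration) to mechanical power. A minimal sketch of such a model follows; the parameter values (mass, drag area, rolling resistance, drivetrain efficiency) are illustrative defaults, not values from the paper:

```python
import math

def physics_power(speed, gradient, accel, mass=75.0, cda=0.32,
                  crr=0.004, rho=1.225, g=9.81, eta=0.976):
    """Estimate rider power (W) from GPS-derived kinematics.

    speed: ground speed (m/s); gradient: road slope (rise/run);
    accel: acceleration (m/s^2). All parameter defaults are
    illustrative assumptions, not the paper's values.
    """
    slope = math.atan(gradient)
    f_aero = 0.5 * rho * cda * speed ** 2       # aerodynamic drag
    f_roll = crr * mass * g * math.cos(slope)   # rolling resistance
    f_grav = mass * g * math.sin(slope)         # gravity along the slope
    f_inertia = mass * accel                    # accelerating rider + bike
    # Total force times speed, divided by drivetrain efficiency;
    # negative values (coasting/braking) are clipped to zero.
    return max((f_aero + f_roll + f_grav + f_inertia) * speed / eta, 0.0)
```

Clipping negative power to zero reflects that a rider cannot produce negative pedaling power while coasting or braking; this is a common preprocessing convention, assumed here rather than taken from the paper.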

Keywords

Machine learning · Recurrent neural network · Autoencoder · Real-time system · Spatiotemporal data · Sports analytics
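The abstract's automatic feature generation relies on an autoencoder, but the paper's architecture and training data are not given on this page. The following is therefore a generic single-hidden-layer autoencoder trained with plain gradient descent on synthetic stand-in data, purely to illustrate how a compressed code can be extracted and used as features; all shapes and hyperparameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for fixed-length windows of GPS-derived channels
# (speed, gradient, heading change, ...); dimensions are illustrative.
X = rng.normal(size=(256, 16))

n_in, n_code = X.shape[1], 4
W1 = rng.normal(scale=0.1, size=(n_in, n_code)); b1 = np.zeros(n_code)
W2 = rng.normal(scale=0.1, size=(n_code, n_in)); b2 = np.zeros(n_in)

def forward(X):
    """Encode to a low-dimensional code, then reconstruct the input."""
    H = np.tanh(X @ W1 + b1)        # code: the learned features
    return H, H @ W2 + b2           # linear reconstruction

mse = lambda R: float(((R - X) ** 2).mean())
err_before = mse(forward(X)[1])

lr = 0.05
for _ in range(500):
    H, R = forward(X)
    d = (R - X) / len(X)            # gradient of 0.5 * sum sq. error / N
    gW2, gb2 = H.T @ d, d.sum(0)
    dH = (d @ W2.T) * (1 - H ** 2)  # backprop through tanh
    gW1, gb1 = X.T @ dH, dH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

err_after = mse(forward(X)[1])
codes = forward(X)[0]               # features for a downstream power model
```

In the paper's pipeline, such codes would be concatenated with the physics-based hand-crafted features before being fed to the regression models; the concatenation step is implied by the abstract, not shown here.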


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. NTT Innovation Institute Inc., East Palo Alto, USA
  2. Dimension Data Australia, Port Melbourne, Australia
