Feature extraction from telematics car driving heatmaps

Abstract

Insurance companies have started to collect high-frequency GPS car driving data to analyze the driving styles of their policyholders. In previous work, we have introduced speed and acceleration heatmaps. These heatmaps were categorized with the K-means algorithm to differentiate varying driving styles. In many situations it is useful to have low-dimensional continuous representations instead of unordered categories. In the present work we use singular value decomposition and bottleneck neural networks (autoencoders) for principal component analysis. We show that a two-dimensional representation is sufficient to reconstruct the heatmaps with high accuracy (measured by Kullback–Leibler divergences).
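The low-rank idea behind the singular value decomposition approach can be sketched as follows; the grid size, the random heatmap, and all variable names are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Illustrative sketch: approximate a speed-acceleration heatmap
# (here a random discrete distribution on an assumed 20 x 20 grid)
# by a rank-2 truncated singular value decomposition, i.e. a
# two-dimensional continuous representation.
rng = np.random.default_rng(0)
heatmap = rng.random((20, 20))
heatmap /= heatmap.sum()                  # normalize to a probability heatmap

U, s, Vt = np.linalg.svd(heatmap, full_matrices=False)
rank2 = (U[:, :2] * s[:2]) @ Vt[:2, :]    # keep the two leading components

# Frobenius reconstruction error of the rank-2 approximation
err = np.linalg.norm(heatmap - rank2)
```

By the Eckart–Young theorem this error equals the Euclidean norm of the discarded singular values, so the decay of the singular values indicates how well two dimensions suffice.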



References

  1. Ayuso M, Guillen M, Pérez-Marín AM (2016) Telematics and gender discrimination: some usage-based evidence on whether men's risk of accidents differs from women's. Risks 4(2), article 10
  2. Gao G, Meng S, Wüthrich MV (2018) Claims frequency modeling using telematics car driving data. Scand Actuar J. https://doi.org/10.1080/03461238.2018.1523068 (to appear)
  3. Hainaut D (2018) A neural-network analyzer for mortality forecast. ASTIN Bull 48(2):481–508
  4. Hastie T, Tibshirani R, Friedman J (2009) The elements of statistical learning: data mining, inference, and prediction, 2nd edn. Springer Series in Statistics, Berlin
  5. Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. Science 313:504–507
  6. Kramer MA (1991) Nonlinear principal component analysis using autoassociative neural networks. AIChE J 37(2):233–243
  7. Liou CY, Cheng CW, Liou JW, Liou DR (2014) Autoencoders for words. Neurocomputing 139:84–96
  8. Verbelen R, Antonio K, Claeskens G (2018) Unraveling the predictive power of telematics data in car insurance pricing. J R Stat Soc Ser C (Appl Stat) (to appear)
  9. Weidner W, Transchel FWG, Weidner R (2016) Classification of scale-sensitive telematic observables for risk-individual pricing. Eur Actuar J 6(1):3–24
  10. Weidner W, Transchel FWG, Weidner R (2016) Telematic driving profile classification in car insurance pricing. Ann Actuar Sci 11(2):213–236
  11. Wüthrich MV (2017) Covariate selection from telematics car driving data. Eur Actuar J 7(1):89–108
  12. Wüthrich MV, Buser C (2016) Data analytics for non-life insurance pricing. SSRN manuscript ID 2870308, version of October 25, 2017


Acknowledgements

Guangyuan Gao: Financially supported by the Social Science Fund of China (Grant no. 16ZDA052) and MOE National Key Research Bases for Humanities and Social Sciences (Grant no. 16JJD910001).


Corresponding author

Correspondence to Mario V. Wüthrich.

Appendix: KL divergence, revisited

In this appendix we briefly revisit the KL divergence. Denote by \(\mathcal{X} \subset {\mathbb {R}}^J\) the \((J-1)\)-unit simplex. Consider \(k\) independent and identically distributed trials among \(J\) classes with class probabilities \(\pi \in \mathcal{X}\); the resulting multinomial distribution has discrete probability weights

$$\begin{aligned} p(k_1,\ldots , k_J) ~ = ~\binom{k}{k_1, \ldots , k_J} ~ \prod _{j=1}^J \pi _j^{k_j} ~ \mathbb {1}_{\{\sum _{j=1}^J k_j =k\}}, \end{aligned}$$

for \(k_j \in {\mathbb {N}}_0\), \(j=1,\ldots , J\). The deviance statistic of an observation \((k_1,\ldots ,k_J)\) of this multinomial distribution is given by

$$\begin{aligned} D\left( (k_1,\ldots , k_J),\pi \right)&= 2 \left[ \sum _{j=1}^J k_j \log \left( \frac{k_j}{k}\right) - \sum _{j=1}^J k_j \log \pi _j \right] \\&= 2 k \sum _{j=1}^J \frac{k_j}{k} \left( \log \left( \frac{k_j}{k}\right) - \log \pi _j \right) . \end{aligned}$$

In Sect. 3 we have defined the empirical distributions on the \((J-1)\)-unit simplex by setting \(x_j=k_j/k\), which, of course, provides \(\varvec{x}=(x_1,\ldots , x_J)'\in \mathcal{X}\). Under this transformation we obtain

$$\begin{aligned} D\left( (k_1,\ldots , k_J),\pi \right) =- 2 k \sum _{j=1}^J x_j \log \frac{\pi _j}{x_j}= 2k ~d_\mathrm{KL}\left( \varvec{x}\Vert \pi \right) . \end{aligned}$$
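This identity can be checked numerically; the counts and model probabilities below are an illustrative example, not the paper's data:

```python
import numpy as np

# Numerical check that the multinomial deviance statistic equals
# 2k times the KL divergence d_KL(x || pi) with x_j = k_j / k.
counts = np.array([30, 50, 20])      # observation (k_1, ..., k_J)
k = counts.sum()
pi = np.array([0.25, 0.45, 0.30])    # class probabilities pi

x = counts / k                        # empirical distribution on the simplex
deviance = 2.0 * (counts @ np.log(x) - counts @ np.log(pi))
d_kl = np.sum(x * np.log(x / pi))     # KL divergence d_KL(x || pi)
```

Here `deviance` and `2 * k * d_kl` agree up to floating-point error.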

Thus, by minimizing the KL divergence in (3.3), we minimize the corresponding deviance statistics, which provides the maximum likelihood estimator of the network parameter \(\theta\) under independent multinomial models (having \(J\) classes) for drivers \(i=1,\ldots , n\). This additionally assumes that all drivers have identical weights \(k\). If the latter is not appropriate we may replace the average KL divergence in (3.3) by a weighted counterpart

$$\begin{aligned} \mathcal{L}^w_\mathrm{KL}\left( \theta , (\varvec{x}_i)_{i=1:n}\right) = \sum _{i=1}^n w_i~ d_\mathrm{KL}\left( \varvec{x}_i \Vert \pi (\varvec{x}_i)\right) , \end{aligned}$$

with weights \(w_i\ge 0\) satisfying \(\sum _{i=1}^n w_i=1\).
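The weighted objective \(\mathcal{L}^w_\mathrm{KL}\) above can be sketched as follows; the function name, array shapes, and test data are assumptions for illustration (each row of `X` holds a driver's empirical heatmap \(\varvec{x}_i\), each row of `P` the corresponding reconstruction \(\pi(\varvec{x}_i)\)):

```python
import numpy as np

def weighted_kl_loss(X, P, w):
    """Weighted KL objective: sum_i w_i * d_KL(x_i || pi_i).

    X, P : arrays of shape (n, J) with strictly positive rows summing to one.
    w    : nonnegative driver weights; normalized here to sum to one.
    """
    X = np.asarray(X, dtype=float)
    P = np.asarray(P, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                              # enforce sum_i w_i = 1
    kl = np.sum(X * np.log(X / P), axis=1)       # d_KL(x_i || pi_i) per driver
    return float(w @ kl)
```

A natural weight choice, as suggested by the derivation above, is \(w_i = k_i / \sum_l k_l\), so that drivers with more observations contribute more to the loss.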


About this article


Cite this article

Gao, G., Wüthrich, M.V. Feature extraction from telematics car driving heatmaps. Eur. Actuar. J. 8, 383–406 (2018). https://doi.org/10.1007/s13385-018-0181-7


Keywords

  • Telematics car driving data
  • Driving styles
  • Unsupervised learning
  • Pattern recognition
  • Image recognition
  • Bottleneck neural network
  • Autoencoder
  • Singular value decomposition
  • Principal component analysis
  • K-means algorithm
  • Kullback–Leibler divergence