Probabilistic Progress Bars

  • Martin Kiefel
  • Christian Schuler
  • Philipp Hennig
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8753)


Predicting the time at which the integral over a stochastic process reaches a target level is of interest in many applications. Often, such computations must be made at low cost, in real time. As an intuitive example that captures many features of this problem class, we choose progress bars, a ubiquitous element of computer user interfaces. Existing predictors are usually based on simple point estimates with no error model, which leads to fluctuating behaviour that confuses the user. It also provides no predictive distribution (risk values), which is crucial in many other application areas. We construct and empirically evaluate a fast, constant-cost algorithm based on a Gauss-Markov process model that provides more information to the user.
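The abstract's key ingredients — a latent Gauss-Markov model of progress, constant per-update cost, and a full predictive distribution rather than a point estimate — can be illustrated with a minimal sketch. The code below is not the authors' algorithm; it assumes a simple random-walk model of the progress rate tracked by a scalar Kalman filter (a standard Gauss-Markov inference recursion), with all parameter names and values chosen purely for illustration.

```python
import math


class ProgressPredictor:
    """Constant-cost completion-time predictor (illustrative sketch).

    Models the instantaneous progress rate as a latent Gauss-Markov
    (random-walk) process and tracks it with a one-dimensional Kalman
    filter, so each update costs O(1) regardless of history length.
    """

    def __init__(self, rate_mean=1.0, rate_var=1.0,
                 process_var=0.01, obs_var=0.05):
        self.rate_mean = rate_mean      # posterior mean of the rate
        self.rate_var = rate_var        # posterior variance of the rate
        self.process_var = process_var  # drift of the latent rate per step
        self.obs_var = obs_var          # noise on observed increments

    def update(self, observed_rate):
        """Kalman update with one observed progress increment per step."""
        # Predict: the latent rate diffuses between observations.
        var = self.rate_var + self.process_var
        # Correct: standard scalar Kalman gain.
        gain = var / (var + self.obs_var)
        self.rate_mean += gain * (observed_rate - self.rate_mean)
        self.rate_var = (1.0 - gain) * var

    def remaining_time(self, remaining_work):
        """Gaussian approximation of the time left until completion.

        Propagates rate uncertainty through t = work / rate to first
        order (delta method), yielding a mean and a standard deviation
        rather than a bare point estimate.
        """
        mean_t = remaining_work / self.rate_mean
        std_t = remaining_work * math.sqrt(self.rate_var) / self.rate_mean ** 2
        return mean_t, std_t
```

The predictive standard deviation is what lets a user interface report risk values (e.g. "done in 5 min ± 20 s") instead of a fluctuating point estimate.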





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Martin Kiefel ¹
  • Christian Schuler ¹
  • Philipp Hennig ¹
  1. Max Planck Institute for Intelligent Systems, Tübingen, Germany
