A Guide to the NeurIPS 2018 Competitions

  • Ralf Herbrich
  • Sergio Escalera
Conference paper
Part of The Springer Series on Challenges in Machine Learning book series (SSCML)


Competitions have become an integral part of advancing the state of the art in artificial intelligence (AI). They differ from benchmarks in one important respect: a competition tests a system end-to-end rather than evaluating only a single component, and thus assesses the practicability of an algorithmic solution in addition to its feasibility. In this volume, we present the details of eight AI competitions that took place between February and December 2018 and were presented at the Neural Information Processing Systems conference in Montreal, Canada on December 8, 2018. The competitions spanned challenges in Robotics, Computer Vision, Natural Language Processing, Games, Health, Systems, and Physics.


Keywords: Competitions · Robotics · Computer vision · Natural language processing · Health · Games · Systems · Physics



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Amazon (Berlin), Berlin, Germany
  2. Universitat de Barcelona and Computer Vision Center, Barcelona, Spain
