
Part of the book series: Springer Handbooks (SHB)

Abstract

This chapter surveys the practices employed in experimentally assessing the special class of computational models embedded in robots. Assessing these models is particularly challenging, mainly because of the difficulty of accurately estimating and modeling the interactions between robots and their environments, especially in the case of autonomous robots, which make decisions without continuous human supervision. The field of autonomous robotics has recognized this difficulty and launched a number of initiatives to address it. After a conceptual premise and a broad introduction to the experimental issues of robotics, the chapter critically reviews these initiatives, which range from taking inspiration from traditional experimental practices to simulations, benchmarking, standards, and competitions.

Abbreviations

AAMAS: autonomous agents and multiagent systems
AI: artificial intelligence
GEM: good experimental methodology

Author information

Corresponding author

Correspondence to Francesco Amigoni.

Copyright information

© 2017 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Amigoni, F., Schiaffonati, V. (2017). Models and Experiments in Robotics. In: Magnani, L., Bertolotti, T. (eds) Springer Handbook of Model-Based Science. Springer Handbooks. Springer, Cham. https://doi.org/10.1007/978-3-319-30526-4_36

  • DOI: https://doi.org/10.1007/978-3-319-30526-4_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-30525-7

  • Online ISBN: 978-3-319-30526-4

  • eBook Packages: Engineering, Engineering (R0)
