Abstract
This chapter surveys the practices employed in experimentally assessing the special class of computational models embedded in robots. Assessing these models is particularly challenging, mainly because of the difficulty of accurately estimating and modeling the interactions between robots and their environments, especially in the case of autonomous robots, which make decisions without continuous human supervision. The field of autonomous robotics has recognized this difficulty and launched a number of initiatives to address it. After a conceptual premise and a broad introduction to the experimental issues of robotics, the chapter critically reviews these initiatives, which range from adaptations of traditional experimental practices to simulations, benchmarking, standards, and competitions.
Abbreviations
- AAMAS: autonomous agents and multiagent systems
- AI: artificial intelligence
- GEM: good experimental methodology
© 2017 Springer-Verlag Berlin Heidelberg
Cite this chapter
Amigoni, F., Schiaffonati, V. (2017). Models and Experiments in Robotics. In: Magnani, L., Bertolotti, T. (eds) Springer Handbook of Model-Based Science. Springer Handbooks. Springer, Cham. https://doi.org/10.1007/978-3-319-30526-4_36
DOI: https://doi.org/10.1007/978-3-319-30526-4_36
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-30525-7
Online ISBN: 978-3-319-30526-4