Machine Vision and Applications, Volume 27, Issue 1, pp 77–86

Addressing the non-functional requirements of computer vision systems: a case study

Original Paper

Abstract

Computer vision plays a major role in most autonomous systems and is particularly fundamental within the robotics industry, where vision data are the main input to all navigation and high-level decision making. Although there is significant research into developing and optimising algorithms for feature detection and environment reconstruction, comparatively little emphasis is placed on how best to map these abstract concepts onto an appropriate software architecture. In this study, we distinguish between the functional and non-functional requirements of a computer vision system. Using a RoboCup humanoid robot system as a case study, we propose and develop a software architecture that fulfils the latter, non-functional requirements. To demonstrate the modifiability of the proposed architecture, we detail a number of feature detection algorithms that were modified to capture the rapidly evolving RoboCup requirements, with emphasis on which aspects of the underlying framework required modification to support their integration. To demonstrate portability, we port our vision system (designed for an application-specific DARwIn-OP humanoid robot) to a general-purpose Raspberry Pi computer. We evaluate the processing time on both hardware platforms for several image streams under different conditions and compare the results against a vision system optimised for functional requirements only. The architecture and implementation presented in this study provide a highly generalisable framework for computer vision system design that is of particular benefit in research and development, competition and other environments in which rapid system evolution is necessary to adapt to domain-specific requirements.
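The modifiability and portability goals described in the abstract can be illustrated with a minimal, hypothetical sketch: if feature detection algorithms are written against a common interface and the image source is abstracted away from the camera hardware, new detectors can be added and the pipeline moved to a different platform without changing the surrounding framework. The class names and interfaces below (ImageSource, FeatureDetector, VisionPipeline, DummyCamera) are illustrative assumptions, not the interfaces of the published NUbots system.

```cpp
// Hypothetical sketch of a modifiable vision pipeline: detectors are registered
// against a small abstract interface, and the image source is abstracted so the
// same pipeline can run on different hardware platforms.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Minimal image stand-in; a real system would wrap a camera driver's buffer.
struct Image {
    int width = 0;
    int height = 0;
    std::vector<unsigned char> pixels;  // e.g. interleaved YUV or RGB
};

// Abstraction over the image source, allowing a port from an
// application-specific robot camera to a general-purpose board.
class ImageSource {
public:
    virtual ~ImageSource() = default;
    virtual Image capture() = 0;
};

class DummyCamera : public ImageSource {
public:
    Image capture() override {
        return Image{320, 240, std::vector<unsigned char>(320 * 240 * 3, 0)};
    }
};

// Common interface for feature detection algorithms (ball, goals, field lines, ...).
class FeatureDetector {
public:
    virtual ~FeatureDetector() = default;
    virtual std::string name() const = 0;
    virtual void detect(const Image& img) = 0;
};

class BallDetector : public FeatureDetector {
public:
    std::string name() const override { return "ball"; }
    void detect(const Image& img) override {
        // Placeholder for a colour/shape-based ball detector.
        std::cout << "ball detector ran on " << img.width << "x" << img.height << " image\n";
    }
};

// The pipeline depends only on the two interfaces above, so adding a new
// detector or changing hardware does not require framework changes.
class VisionPipeline {
public:
    explicit VisionPipeline(std::unique_ptr<ImageSource> source)
        : source_(std::move(source)) {}

    void addDetector(std::unique_ptr<FeatureDetector> detector) {
        detectors_.push_back(std::move(detector));
    }

    void runOnce() {
        Image frame = source_->capture();
        for (const auto& d : detectors_) {
            d->detect(frame);
        }
    }

private:
    std::unique_ptr<ImageSource> source_;
    std::vector<std::unique_ptr<FeatureDetector>> detectors_;
};

int main() {
    VisionPipeline pipeline(std::make_unique<DummyCamera>());
    pipeline.addDetector(std::make_unique<BallDetector>());
    pipeline.runOnce();
    return 0;
}
```

In this style of design, swapping the DummyCamera for a platform-specific camera class, or registering an additional detector, touches only the code being added, which is the kind of non-functional property the paper evaluates.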

Keywords

Computer vision · Software architecture · Robotics


Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Shannon Fenn ¹
  • Alexandre Mendes ¹
  • David M. Budden ²

  1. Faculty of Engineering and Built Environment, School of Electrical Engineering and Computer Science, The University of Newcastle, Callaghan, Australia
  2. Systems Biology Laboratory, Melbourne School of Engineering, The University of Melbourne, Parkville, Australia