DetectionEvaluationJ: A Tool to Evaluate Object Detection Algorithms

  • C. Domínguez
  • M. García
  • J. Heras (corresponding author)
  • A. Inés
  • E. Mata
  • V. Pascual
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10672)

Abstract

Object detection is an area of computer vision with applications in several contexts, such as biomedicine and security, and it is currently growing thanks to the availability of image datasets and the use of deep learning techniques. In order to apply object detection algorithms, it is instrumental to know the quality of the regions they detect; however, such an evaluation is usually performed with ad-hoc tools built for each concrete problem, and, to the best of our knowledge, no simple and generic tool exists to conduct this task. In this paper, we present DetectionEvaluationJ, an open-source tool designed to evaluate the performance of object detection algorithms in any context using several metrics. The tool is independent of both the programming language employed to implement the detection algorithms and the concrete problem where such algorithms are applied.
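
The quality of a detected region is commonly measured by comparing its bounding box against a ground-truth annotation via the intersection over union (IoU, also known as the Jaccard index): a detection counts as a true positive when its IoU with a ground-truth box exceeds a fixed threshold (0.5 in the PASCAL VOC criterion), and metrics such as precision, recall and the F-measure are then derived from the resulting true/false positive counts. The following Python sketch computes the IoU of two axis-aligned boxes; it is a minimal illustration of this standard metric, not the implementation used by DetectionEvaluationJ, and the (x1, y1, x2, y2) corner convention is an assumption.

    def iou(box_a, box_b):
        """Intersection over union of two axis-aligned bounding boxes,
        each given as (x1, y1, x2, y2) corners (assumed convention)."""
        # Corners of the intersection rectangle.
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        # Zero intersection area when the boxes do not overlap.
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # Example: two 40x40 boxes sharing a 20x20 overlap.
    print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # 400 / 2800 ≈ 0.143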

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • C. Domínguez
  • M. García
  • J. Heras (corresponding author)
  • A. Inés
  • E. Mata
  • V. Pascual

  Department of Mathematics and Computer Science, University of La Rioja, La Rioja, Spain