Google Colaboratory for Quantifying Stomata in Images

  • Conference paper
  • First Online:
Computer Aided Systems Theory – EUROCAST 2019 (EUROCAST 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12014)

Abstract

Stomata are pores in the epidermal tissue of plants formed by specialized cells called guard cells (also known as occlusive cells). Analyzing the number and behavior of stomata is carried out by studying microscopic images and can serve, among other purposes, to better manage crops in agriculture. However, quantifying the stomata in an image is a laborious process, since a single image might contain dozens of them, so it is worthwhile to automate the detection process. This problem can be framed as object detection, a task widely studied in computer vision. Currently, the best approaches to object detection are based on deep learning techniques; although they are very successful, they can be difficult to use. In this work, we address this problem, specifically the detection of stomata, by building a Jupyter notebook in Google Colaboratory that allows biologists to automatically detect stomata in their images.
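
The notebook wraps a deep learning object detector so that users only need to supply their images. As an illustration of the kind of workflow it automates, the following is a minimal sketch of counting the stomata detected by a YOLO-style model through OpenCV's dnn module; the file names ("stomata.cfg", "stomata.weights"), the input size, and the thresholds are hypothetical placeholders, not values taken from the paper.

    # Minimal sketch (not the paper's notebook): count stomata detected by a
    # YOLO-style model via OpenCV's dnn module. File names and thresholds
    # below are hypothetical placeholders.
    import cv2

    def count_stomata(image_path,
                      cfg_path="stomata.cfg",          # hypothetical Darknet config
                      weights_path="stomata.weights",  # hypothetical trained weights
                      conf_threshold=0.25,
                      nms_threshold=0.4):
        # Load the network and wrap it in OpenCV's high-level detection API.
        net = cv2.dnn.readNetFromDarknet(cfg_path, weights_path)
        model = cv2.dnn_DetectionModel(net)
        model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

        image = cv2.imread(image_path)
        if image is None:
            raise FileNotFoundError("Could not read image: " + image_path)

        # detect() runs the network and non-maximum suppression, returning
        # class ids, confidence scores and bounding boxes.
        class_ids, scores, boxes = model.detect(image, conf_threshold, nms_threshold)
        return len(boxes), boxes

    n, boxes = count_stomata("leaf_sample.jpg")  # hypothetical input image
    print("Detected", n, "stomata")

In Google Colaboratory such a snippet runs directly in the browser once the image and model files have been uploaded to the session, which is the kind of step the notebook is designed to hide from the end user.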

Partially supported by Ministerio de Industria, Economía y Competitividad, project MTM2017-88804-P; and Agencia de Desarrollo Económico de La Rioja, project 2017-I-IDD-00018. We also acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.

Author information

Corresponding author: Ángela Casado-García

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Casado-García, Á., Heras, J., Sanz-Sáez, A. (2020). Google Colaboratory for Quantifying Stomata in Images. In: Moreno-Díaz, R., Pichler, F., Quesada-Arencibia, A. (eds) Computer Aided Systems Theory – EUROCAST 2019. Lecture Notes in Computer Science, vol 12014. Springer, Cham. https://doi.org/10.1007/978-3-030-45096-0_29

  • DOI: https://doi.org/10.1007/978-3-030-45096-0_29

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-45095-3

  • Online ISBN: 978-3-030-45096-0

  • eBook Packages: Computer Science, Computer Science (R0)
