
O-LGMD: An Opponent Colour LGMD-Based Model for Collision Detection with Thermal Images at Night

  • Conference paper
  • First Online:
Artificial Neural Networks and Machine Learning – ICANN 2022 (ICANN 2022)

Abstract

Detecting and avoiding collisions at night is an enormous challenge for intelligent robots and vehicles because of poor lighting conditions. Thermal cameras capture night scenes as temperature maps and often offer different pseudo-colour modes to enhance the visual effect for human eyes. Since the features of approaching objects can be well enhanced in the pseudo-colour output of a thermal camera, colour cues are likely to help the Lobula Giant Movement Detector (LGMD) pick up collision cues effectively. However, no published investigation has addressed this aspect, and it is not clear whether LGMD-like neural networks can take pseudo-colour information as input for collision detection in extremely dim conditions. In this study, we investigate several thermal pseudo-colour modes and propose a triple-channel LGMD-based neural network that extracts colour cues by processing the pseudo-colour images directly. The proposed model consists of three sub-networks, each dealing with one specific opponent colour channel: black-white, red-green, or yellow-blue. A collision alarm is triggered if any channel's output exceeds its threshold for a few successive frames. Our experiments demonstrate that the proposed bio-inspired collision detection system quickly detects objects on a direct collision course in extremely low lighting conditions. The proposed method shows its potential to be part of the sensor systems of future robots or vehicles driving at night or in other extreme lighting conditions, helping to avoid fatal collisions.

Supported by the GDUPT-UoL joint research lab, also known as the Computational Intelligence Laboratory (CIL), at the University of Lincoln.
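
To make the architecture described in the abstract concrete, the sketch below outlines in Python how a triple-channel opponent-colour LGMD detector could be organised: the pseudo-colour frame is split into black-white, red-green and yellow-blue opponent channels, each channel is fed to a simplified LGMD sub-network, and an alarm is raised when any channel's output stays above its threshold for several successive frames. The opponent transform, the simplified LGMD layers, and all parameter values (inhibition weight, spike threshold, number of successive spikes) are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of a triple-channel opponent-colour LGMD detector.
# The opponent transform, the LGMD simplification and the parameter
# values are assumptions for illustration only.
import numpy as np
from scipy.ndimage import uniform_filter


def opponent_channels(rgb):
    """Split a pseudo-colour frame (H, W, 3), float in [0, 1], into
    black-white, red-green and yellow-blue opponent channels
    (a generic opponent transform, assumed here)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    bw = (r + g + b) / 3.0           # achromatic (black-white) channel
    rg = r - g                       # red-green opponency
    yb = (r + g) / 2.0 - b           # yellow-blue opponency
    return [bw, rg, yb]


class LGMDChannel:
    """Simplified single-channel LGMD: frame differencing (P layer),
    delayed spatially spread inhibition (I layer), excitation-inhibition
    competition (S layer), and a sigmoid membrane potential."""

    def __init__(self, w_i=0.4, spike_thresh=0.88):
        self.prev_frame = None
        self.prev_excitation = None
        self.w_i = w_i                    # inhibition weight (assumed)
        self.spike_thresh = spike_thresh  # firing threshold (assumed)

    def step(self, frame):
        if self.prev_frame is None:       # first frame: no motion yet
            self.prev_frame = frame
            self.prev_excitation = np.zeros_like(frame)
            return 0.5, False
        excitation = np.abs(frame - self.prev_frame)                # P layer
        inhibition = uniform_filter(self.prev_excitation, size=3)   # I layer
        s = np.maximum(excitation - self.w_i * inhibition, 0.0)     # S layer
        k = s.sum() / s.size                                        # summed activity
        potential = 1.0 / (1.0 + np.exp(-k * np.sqrt(s.size)))      # membrane potential
        self.prev_frame = frame
        self.prev_excitation = excitation
        return potential, potential > self.spike_thresh


class OLGMD:
    """Three LGMD sub-networks, one per opponent channel; an alarm is
    raised when any channel spikes for n_spikes successive frames."""

    def __init__(self, n_spikes=4):
        self.channels = [LGMDChannel() for _ in range(3)]
        self.n_spikes = n_spikes
        self.counts = [0, 0, 0]

    def step(self, rgb_frame):
        """Process one pseudo-colour frame; return True when a collision
        alarm should be triggered."""
        for i, img in enumerate(opponent_channels(rgb_frame)):
            _, spiked = self.channels[i].step(img)
            self.counts[i] = self.counts[i] + 1 if spiked else 0
        return any(c >= self.n_spikes for c in self.counts)
```

In use, each pseudo-colour frame from the thermal camera would be normalised to [0, 1] and passed to OLGMD.step(); the three per-channel spike counters implement the "few successive frames" rule mentioned in the abstract.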



Acknowledgement

This research was supported by the EU HORIZON 2020 project ULTRACEPT (778062) and the National Natural Science Foundation of China project (62073091).

Author information


Corresponding author

Correspondence to Shigang Yue.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Y. et al. (2022). O-LGMD: An Opponent Colour LGMD-Based Model for Collision Detection with Thermal Images at Night. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13531. Springer, Cham. https://doi.org/10.1007/978-3-031-15934-3_21


  • DOI: https://doi.org/10.1007/978-3-031-15934-3_21

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15933-6

  • Online ISBN: 978-3-031-15934-3

  • eBook Packages: Computer Science, Computer Science (R0)
