
Object Segmentation for Vehicle Video and Dental CBCT by Neuromorphic Convolutional Recurrent Neural Network

  • Woo-Sup Han
  • Il Song Han
Conference paper
Part of the Studies in Computational Intelligence book series (SCI, volume 751)

Abstract

Neuromorphic visual processing, inspired by the biological vision system of the brain, offers an alternative approach to applying machine vision in varied environments. Motivated by the growing interest in transportation safety through Advanced Driver Assistance Systems and driverless cars, a neuromorphic convolutional recurrent neural network was proposed and tested for night-time detection of vehicles and vulnerable road users (VRUs). The proposed convolutional-recurrent network was evaluated successfully for object detection without optimized, complex template matching or a prior denoising neural network. On a real-life night-time road video dataset, it achieved a 98% detection/segmentation rate with 0% false positives. The same network was also applied successfully to tooth segmentation in dental X-ray 3D CT (CBCT), including the gum region. Feature extraction was based on neuromorphic visual processing filters: either hand-cut filters mimicking visual cortex experiments or a small auto-encoder filter trained on partial X-ray images. The consistent performance of both filter types demonstrates the feasibility of real-time, robust neuromorphic vision implemented on a small embedded system or a portable computer.
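The abstract does not give implementation details beyond the description above. As a purely illustrative sketch of the kind of hand-designed, orientation-selective filtering it refers to, the Python snippet below builds a small bank of Gabor-like kernels (a common stand-in for visual-cortex simple-cell receptive fields) and convolves them with an image. All function names and parameter values here are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d


def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0, gamma=0.5):
    """Gabor kernel: a common model of orientation-selective V1 receptive fields.

    Parameter values are illustrative assumptions, not the paper's settings.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()  # zero-mean, so flat regions give no response


def orientation_feature_maps(image, n_orientations=4):
    """Convolve the image with a small oriented filter bank and half-wave rectify."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    maps = []
    for theta in thetas:
        response = convolve2d(image, gabor_kernel(theta=theta),
                              mode="same", boundary="symm")
        maps.append(np.maximum(response, 0.0))  # rectified orientation response
    return np.stack(maps)


if __name__ == "__main__":
    img = np.random.rand(64, 64)        # stand-in for a video frame or CBCT slice
    feats = orientation_feature_maps(img)
    print(feats.shape)                  # (4, 64, 64): one map per orientation
```

Such orientation maps would only be the feature-extraction front end; the paper's convolutional-recurrent stages and the auto-encoder-trained alternative filters are not reproduced here.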

Keywords

Neuromorphic visual processing · Visual cortex · Machine vision · Vehicle detection · CBCT · Convolutional-recurrent neural networks

Notes

Acknowledgements

The research on 3D tooth segmentation was sponsored by Vatech, Korea. We thank Mr. Ik Kim for his cooperation in our research on tooth segmentation using dental X-ray images, in particular for his guidance and discussion on developing the medical applications.


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. ODIGA Ltd, London, UK
  2. Graduate School for Green Transportation, Korea Advanced Institute of Science and Technology, Daejeon, Korea
