
Electronics Engineering Perspectives on Computer Vision Applications: An Overview of Techniques, Sub-areas, Advancements and Future Challenges

Chapter in: Cutting Edge Applications of Computational Intelligence Tools and Techniques

Part of the book series: Studies in Computational Intelligence (SCI, volume 1118)

Abstract

This chapter provides a strategic overview of applications in the computer vision domain. We first introduce the etymology of the term, the main tasks of the field, and its key techniques and algorithms. Traditional feature extraction methods and deep learning techniques are explored, including prominent algorithms such as the Region-Based Convolutional Neural Network (R-CNN) and You Only Look Once (YOLO). We then discuss important sub-areas such as image classification, object detection, and semantic image segmentation, and showcase the versatility of computer vision in applications including autonomous vehicles, healthcare, and surveillance. We also examine the field's challenges and potential, highlighting the need for advanced algorithmic methodologies, efficient hardware, robust privacy protections, and conscientious ethical considerations, and we survey emerging trends such as cross-modal learning, sophisticated ‘vision GPT’ models, and unified models that share architecture and parameters across tasks. These directions point to a transformative impact across sectors including autonomous driving, healthcare imaging, and e-commerce, and they underscore the need for continued research and development to address data scarcity, model interpretability, and privacy concerns. By effectively addressing these challenges and capitalizing on emerging trends, computer vision stands poised to make profound advancements with far-reaching implications. This overview aims to provide a solid foundation for understanding the field of computer vision and its potential impact across multiple industries and applications.
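As a concrete illustration of the object detection task highlighted above, the following is a minimal sketch (not taken from the chapter itself) of running a pretrained two-stage detector of the R-CNN family on a single image. It assumes PyTorch and torchvision 0.13 or later are installed; the image path is a placeholder, and the chapter's material is not limited to this library or model.

```python
# Minimal illustrative sketch: object detection with a pretrained
# Faster R-CNN (an R-CNN-family detector). Assumes PyTorch and
# torchvision >= 0.13; "street.jpg" is a hypothetical input image.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Load a COCO-pretrained detector and switch it to inference mode.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Read the image as a uint8 CHW tensor and rescale it to [0, 1],
# the value range the detector expects.
image = convert_image_dtype(read_image("street.jpg"), torch.float)

with torch.no_grad():
    # The model takes a list of images and returns one dict per image
    # containing predicted boxes, class labels, and confidence scores.
    prediction = model([image])[0]

# Report reasonably confident detections.
for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if score >= 0.5:
        print(f"class {int(label)}: score {float(score):.2f}, box {box.tolist()}")
```

A single-stage detector such as YOLO follows the same input/output pattern but predicts boxes and class scores in one forward pass, trading some accuracy for speed.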

Yu Xun Zheng and K.-W. (G. H.) A. Chee—These authors contributed equally and share first authorship.



Acknowledgements

Y. X. Zheng is greatly indebted to co-first author Professor K.-W. (G. H.) A. Chee for conceptualizing and leading the manuscript preparation, adding and editing substantial technical content, collating the data, and supervising the project. This work was supported by the BK21 program (Electronic Electric Convergence Talent Nurturing Education Research Center), funded by the Ministry of Education of Korea, the Dong-il Cultural Scholarship Foundation, and the Kyungpook National University Research Fund 2021.

Author information


Corresponding author

Correspondence to K.-W. (G. H.) A. Chee.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Zheng, Y.X., Chee, KW., Paul, A., Kim, J., Lv, H. (2023). Electronics Engineering Perspectives on Computer Vision Applications: An Overview of Techniques, Sub-areas, Advancements and Future Challenges. In: Daimi, K., Alsadoon, A., Coelho, L. (eds) Cutting Edge Applications of Computational Intelligence Tools and Techniques. Studies in Computational Intelligence, vol 1118. Springer, Cham. https://doi.org/10.1007/978-3-031-44127-1_6
