Abstract
This chapter provides a strategic overview of applications in the computer vision domain. We first introduce the origins of computer vision, its main tasks, and its key techniques and algorithms. Traditional feature extraction methods and deep learning techniques are explored, including prominent algorithms such as the Region-Based Convolutional Neural Network (R-CNN) and You Only Look Once (YOLO). We discuss important sub-areas such as image classification, object detection, and semantic image segmentation, and showcase the versatility of computer vision, particularly in autonomous vehicles, healthcare, and surveillance. Furthermore, we delve into the challenges and potential of the field, highlighting the need for advanced algorithmic methodologies, efficient hardware, robust privacy protections, and conscientious ethical consideration. We also explore upcoming trends, including cross-modal learning, sophisticated ‘vision GPT’ models, and unified models that share architecture and parameters across different tasks; these directions point to a transformative impact across sectors such as autonomous driving, healthcare imaging, and e-commerce. Finally, we outline future challenges and trends, underscoring the importance of continued research and development on issues such as data scarcity, model interpretability, and privacy. By effectively addressing these challenges and capitalizing on emerging trends, computer vision stands poised to make profound advancements with far-reaching implications. This overview aims to provide a solid foundation for understanding the field of computer vision and its potential impact across multiple industries and applications.
Yu Xun Zheng and K.-W. (G. H.) A. Chee—These authors contributed equally and share first authorship.
Acknowledgements
Y. X. Zheng is greatly indebted to co-first author Professor K.-W. (G. H.) A. Chee for conceptualizing and leading the manuscript preparation, adding and editing substantial technical content, collating data, and supervising the project. This work was supported by the BK21 (Electronic Electric Convergence Talent Nurturing Education Research Center) funded by the Ministry of Education of Korea, the Dong-il Cultural Scholarship Foundation, and the Kyungpook National University Research Fund 2021.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Zheng, Y.X., Chee, KW., Paul, A., Kim, J., Lv, H. (2023). Electronics Engineering Perspectives on Computer Vision Applications: An Overview of Techniques, Sub-areas, Advancements and Future Challenges. In: Daimi, K., Alsadoon, A., Coelho, L. (eds) Cutting Edge Applications of Computational Intelligence Tools and Techniques. Studies in Computational Intelligence, vol 1118. Springer, Cham. https://doi.org/10.1007/978-3-031-44127-1_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44126-4
Online ISBN: 978-3-031-44127-1
eBook Packages: Intelligent Technologies and Robotics; Intelligent Technologies and Robotics (R0)