
Guest Editorial: Special Issue on Embedded Computer Vision


It is with great pleasure that we present this Special Issue of the Journal of Signal Processing Systems (JSPS) dedicated to Embedded Computer Vision! We are pleased to include six state-of-the-art papers from leaders in this field, both from industry and academia, who continue to push embedded computer vision technology forward.

While the idea for this special issue originated among the Guest Editors at one of the CVPR workshops we have organized on the same topic, it is the work of the contributing authors that makes it a success. The papers were solicited from the workshop participants and through an open call for papers, so the initial submissions were in many ways already pre-filtered. Out of 24 submitted papers, the highly selective review process yielded the six papers included here. They cover a broad range of challenges encountered in the practical deployment of embedded vision systems, especially where high computational performance requirements meet limited resources. We present papers describing a range of novel solutions: a deep learning accelerator, a robust aerial tracking system, an FPGA-based solution for an aerial visual servoing task, an approach to using low-cost hardware for real-time vision, a real-time motion detector, and an image enhancement approach based on human vision.

In the following we summarize their contributions:

  • In their paper “Efficient Object Detection Using Embedded Binarized Neural Networks” (https://doi.org/10.1007/s11265-017-1255-5), the authors Jaeha Kung (Georgia Tech), David Zhang (SRI International), Gooitzen van der Wal (SRI International), Sek Chai (SRI International), and Saibal Mukhopadhyay (Georgia Tech) discuss their research on energy-efficient deep learning accelerators and their application to vision-based object detection. The paper presents the training and implementation of a deep neural network with single-bit weights, and it offers application-based insights into the power-performance tradeoffs gained with a binarized approach.

  • In their paper “Watch Out: Embedded Video Tracking with BST for Unmanned Aerial Vehicles” (https://doi.org/10.1007/s11265-017-1279-x), the authors Francesco Battistone (MER MEC S.p.A.), Alfredo Petrosino (University Parthenope of Naples), and Vincenzo Santopietro (University Parthenope of Naples) present a real-time tracking system that runs efficiently on an Nvidia Jetson board mounted on a UAV. Their approach to long-term video tracking, named Best Structured Tracker (BST), has been verified both on challenging datasets and in real flight situations. Results show that a robust system like Watch Out can track a wide variety of targets in real time.

  • In their paper “FPGA-Based Fast Response Image Analysis for Orientational Control in Aerial Manipulation Tasks” (https://doi.org/10.1007/s11265-017-1286-y), the authors Robert Ladig, Suphachart Leewiwatwong, and Kazuhiro Shimonomura (all from Ritsumeikan University) explore the feasibility of using an on-board field-programmable gate array (FPGA) for aerial visual servoing tasks. They designed a novel derivative of the Fast Incremental Hough Transform 2 (FIHT2) suited for implementation on an FPGA. The algorithm identifies the orientation and distance of bar-like objects even in cluttered environments, using only the FPGA and a monocular camera image. It is then tested in a practical real-life application, in which a gripper mounted on an aerial robot autonomously aligns itself with a bar-like object using this method.

  • In their paper “Image Processing Units on Ultra-low-cost Embedded Hardware: Algorithmic Optimizations for Real-time Performance” (https://doi.org/10.1007/s11265-017-1267-1), the authors Suraj Nair (TUM CREATE, Singapore), Nikhil Somani (TUM CREATE, Singapore), Artur Grunau (Technische Universität München), Emmanuel Dean-Leon (Technische Universität München), and Alois Knoll (Technische Universität München) discuss the growing popularity of low-cost single-board computers (SBCs) and how they can be used to build real-time computer vision (CV) applications. A key challenge presented and addressed in the paper is how popular computer vision algorithms can be mapped to the graphics processor of such SBCs in the absence of high-level GPU APIs such as CUDA or OpenCL. Taking the Raspberry Pi, one of the most commonly used SBCs, as a demonstration platform, the authors present the re-engineering of CV algorithms to overcome the hardware limitations of the SBC’s GPU while still achieving real-time performance.

  • In their paper “Real-Time Embedded Motion Detection via Neural Response Mixture Modeling” (https://doi.org/10.1007/s11265-017-1265-3), the authors Mohammad Javad Shafiee (University of Waterloo), Parthipan Siva (Aimetis Corporation), Paul Fieguth (University of Waterloo), and Alexander Wong (University of Waterloo) discuss their new research on utilizing deep neural networks for real-time applications on embedded systems. They propose a framework for real-time motion detection based on the rich deep features extracted from the neural responses of an efficient, stochastically formed deep neural network, the so-called StochasticNet. The neural response features are used to construct Gaussian mixture models (GMMs) that detect motion in a scene. The neural response mixture (NeRM) model is evaluated in real time on an Axis surveillance camera. Results show that the proposed NeRM approach improves GMM performance with fewer false detections and illustrate the potential of deploying deep neural networks on embedded devices in real-time applications.

  • In their paper “Digital Image Fusion Using HVS in Block Based Transforms” (https://doi.org/10.1007/s11265-017-1252-8), the authors Vadhi Radhika (Jawaharlal Nehru Technological University), Kilari Veeraswamy (QIS College of Engineering and Technology), and Samayamantula Srinivas Kumar (Jawaharlal Nehru Technological University) present an image enhancement method based on the human visual system (HVS) and block transforms. Their method improves the achieved quality while reducing complexity compared to earlier methods.
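To give a flavor of the single-bit-weight idea behind the first paper, the following is a minimal NumPy sketch of weight binarization with a per-layer scaling factor, as found in common binarized-network formulations. This is our own illustration, not the authors' accelerator implementation; the function and variable names are hypothetical.

```python
import numpy as np

def binarize(W):
    """Binarize real-valued weights to +1/-1 by sign, keeping a
    per-layer scaling factor alpha = mean(|W|) so the binarized
    layer approximates the full-precision one."""
    alpha = np.abs(W).mean()
    Wb = np.where(W >= 0, 1.0, -1.0)
    return Wb, alpha

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # a toy 8-input, 4-output layer
x = rng.normal(size=8)

Wb, alpha = binarize(W)
y_full = W @ x                # full-precision layer output
y_bin = alpha * (Wb @ x)      # binarized approximation
```

With weights restricted to ±1, the matrix product reduces to additions and subtractions (or XNOR/popcount on packed bits in hardware), which is the source of the power-performance gains discussed in the paper.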
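The orientation estimation in the third paper builds on a Hough-transform variant. As a rough illustration of the underlying principle (a plain Hough line vote, not the authors' FIHT2 derivative; all names below are our own), one can estimate the dominant orientation of a bar-like point set as follows:

```python
import numpy as np

def hough_orientation(points, n_theta=180, rho_res=0.5):
    """Estimate the dominant line orientation of a 2-D point set by
    voting in (theta, rho) space: each point votes for all lines
    rho = x*cos(theta) + y*sin(theta) passing through it; the
    accumulator peak gives the normal angle of the dominant line."""
    thetas = np.deg2rad(np.arange(n_theta))
    max_rho = np.hypot(*np.abs(points).max(axis=0)) + 1
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = ((rhos + max_rho) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1
    t, _ = np.unravel_index(acc.argmax(), acc.shape)
    return int(t)  # normal angle in degrees; bar direction = t - 90 (mod 180)

# synthetic bar through the origin at 30 degrees
phi = np.deg2rad(30)
pts = np.stack([np.arange(50) * np.cos(phi),
                np.arange(50) * np.sin(phi)], axis=1)
theta = hough_orientation(pts)  # close to 120 (normal of a 30-degree bar)
```

The paper's contribution lies in restructuring this kind of voting scheme so it maps efficiently onto FPGA logic, which this software sketch does not attempt to capture.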
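The mixture-model side of the fifth paper can be illustrated with a much-simplified single-Gaussian-per-pixel background model (a sketch of the general background-subtraction technique; the authors build their mixture models on StochasticNet features rather than raw pixels, and `update_gmm` and its parameters are our own names):

```python
import numpy as np

def update_gmm(mean, var, frame, lr=0.05, k=2.5):
    """One update step of a single-Gaussian-per-pixel background
    model: pixels more than k standard deviations from the model
    are flagged as motion; background pixels adapt with rate lr."""
    dist = np.abs(frame - mean)
    fg = dist > k * np.sqrt(var)      # pixels far from the model => motion
    a = np.where(fg, 0.0, lr)         # only adapt background pixels
    mean = (1 - a) * mean + a * frame
    var = (1 - a) * var + a * (frame - mean) ** 2
    return fg, mean, np.maximum(var, 1e-4)

# toy sequence: static background with a bright moving blob
bg = np.full((16, 16), 0.2)
mean, var = bg.copy(), np.full_like(bg, 0.01)
frame = bg.copy()
frame[4:8, 4:8] = 0.9                 # the "moving object"
fg, mean, var = update_gmm(mean, var, frame)
```

Replacing the raw pixel values here with deep neural response features, as NeRM does, is what yields the reported reduction in false detections.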

Embedded computer vision as a field has been at the forefront of solving practical issues of computer vision, and we hope this special issue will help shed light on the many current and future applications it has made possible.

Guest Editors of the JSPS Special Issue on Embedded Computer Vision.

Stefano Mattoccia, Branislav Kisačanin, Margrit Gelautz,

Sek Chai, Nabil Belbachir, Goksel Dedeoglu, Fridtjof Stein

Author information

Correspondence to Stefano Mattoccia.


Mattoccia, S., Kisačanin, B., Gelautz, M. et al. Guest Editorial: Special Issue on Embedded Computer Vision. J Sign Process Syst 90, 873–876 (2018). https://doi.org/10.1007/s11265-018-1365-8
