1 Introduction

The latest-generation earth observation instruments on airborne and satellite platforms are producing an almost continuous stream of high-dimensional data. This exponentially growing data volume poses new challenges for real-time image processing and recognition, and real-time image data processing on satellites therefore plays an increasingly important role in space applications. In the past, owing to technical limitations, on-board image processing systems could only handle real-time signals with low data rates and modest storage requirements, and processing of massive image data was difficult to achieve. In recent years, with the development of technologies such as new aerospace digital signal processors (DSPs) and large-scale radiation-hardened field-programmable gate arrays (FPGAs), real-time image data processing on satellites has become technically feasible.

Real-time image data processing on satellites replaces the traditional approach in which data processing is performed only after the raw data have been transferred to the ground system. Its main advantages are: (1) the raw image data collected by the sensor do not need to be compressed and transmitted, so higher-precision data are available to the processor; (2) on-board image processing can effectively reduce the communication overhead between the satellite and ground equipment; (3) on-board image processing can reduce the load on ground data processing equipment; (4) the processing results are obtained in real time, so that astronauts can respond to target operations more quickly.

Real-time image data processing on satellites also has its limitations. It requires the spacecraft to provide enough space for the image processing equipment and consumes part of the available power budget. However, as image processing technology continues to develop, the impact of these shortcomings is gradually diminishing. It is believed that the benefits of on-board image processing will outweigh its limitations, and that real-time image processing systems will become an important and integral component of spacecraft.

2 Real-time image processing

This special issue consists of 17 papers addressing different aspects of real-time image processing for remote sensing applications.

The first paper entitled “Fast dimensionality reduction and classification of hyperspectral images with extreme learning machines” by Haut et al. [1] presents a real-time method for dimensionality reduction and classification of hyperspectral images using artificial neural networks. The proposed extreme learning machine (ELM)-based method achieves lower compression/decompression reconstruction error and is faster than other fast approaches such as non-negative matrix factorization (NMF), independent component analysis (ICA) and the multi-layer perceptron (MLP), owing to the non-iterative nature of the single-layer ELM.
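
For readers unfamiliar with why ELM training is non-iterative, the following is a minimal sketch of an ELM-style autoencoder for dimensionality reduction in NumPy. It only illustrates the principle and is not the authors' implementation; the hidden-layer size, activation and regularization value are assumptions chosen for the example.

    import numpy as np

    def elm_autoencoder(X, L=30, reg=1e-3, seed=0):
        """Minimal ELM-style autoencoder: random hidden layer, closed-form output weights.

        X   : (N, d) matrix of hyperspectral pixels (N samples, d bands)
        L   : number of hidden neurons (compressed dimensionality), L < d
        reg : ridge regularization for the least-squares solve
        """
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.standard_normal((d, L))          # random input weights (never trained)
        b = rng.standard_normal(L)               # random biases
        H = np.tanh(X @ W + b)                   # (N, L) hidden activations = compressed features
        # Output weights solved in closed form by one regularized least-squares problem,
        # which is why ELM avoids the iterative training of an MLP.
        beta = np.linalg.solve(H.T @ H + reg * np.eye(L), H.T @ X)   # (L, d)
        X_rec = H @ beta                         # reconstruction from the compressed features
        return H, X_rec

    # Toy usage: 1000 pixels with 100 spectral bands compressed to 30 features.
    X = np.random.rand(1000, 100)
    H, X_rec = elm_autoencoder(X)
    print(H.shape, np.mean((X - X_rec) ** 2))

The only learned parameters are the output weights beta, obtained from a single regularized least-squares solve; the random hidden layer is fixed, which is what makes this family of methods faster than iteratively trained networks such as the MLP.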

The second paper entitled “GPU-based fast hyperspectral image classification using joint sparse representation with spectral consistency constraint” by Pan et al. [2] proposes a joint sparse representation classifier with spectral consistency constraint (JSRC-SCC) for hyperspectral image classification. A parallel implementation on GPUs is developed to expedite the JSRC-SCC classification.

The third paper entitled “Real-time deep satellite image quality assessment” by Risnandar et al. [3] develops deep convolutional neural networks for satellite image quality assessment. The proposed algorithm performs no-reference quality assessment, which allows variously distorted satellite images to be screened out in real-time remote sensing. Compared with other methods, the proposed approach reduces the number of shift-add operations in the exponential, logarithmic and trigonometric functions, leading to superior computational efficiency.

The fourth paper entitled “Real-time multi-aircraft tracking in aerial scene with deep orientation network” by Maher et al. [4] proposes a deep-patch orientation network (DON) to learn the target’s orientation from the structural information in the training samples. Based on this DON structure, the proposed method is efficient for real-time ground target-tracking scenarios, reducing the number of identity switches by about 67%.

The fifth paper entitled “GPU implementation of RX detection using spectral derivative features” by Han et al. [5] introduces a novel implementation of the RX algorithm on an NVIDIA GeForce 1060 GPU that uses derivative features for detecting anomalies in hyperspectral images. In this approach, spectral derivatives are computed before the data are sent to the RX detector, which improves detection performance. The GPU parallel implementation achieves real-time processing and also eliminates the storage burden of on-board processing.
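
For context, the RX detector scores each pixel by its Mahalanobis distance from the background statistics. The sketch below shows the global form of the detector applied to first-order spectral-derivative features; it is a plain NumPy illustration of the principle, not the authors' GPU implementation, and the regularization constant is an assumption.

    import numpy as np

    def rx_on_derivative(cube, eps=1e-6):
        """Global RX anomaly detection on first-order spectral-derivative features.

        cube : (rows, cols, bands) hyperspectral image
        Returns a (rows, cols) map of Mahalanobis distances.
        """
        # First-order spectral derivative: difference between adjacent bands.
        deriv = np.diff(cube, axis=2)                     # (rows, cols, bands-1)
        r, c, b = deriv.shape
        X = deriv.reshape(-1, b)                          # one spectrum per row
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + eps * np.eye(b)   # regularized background covariance
        cov_inv = np.linalg.inv(cov)
        diff = X - mu
        # RX score: (x - mu)^T C^{-1} (x - mu) for every pixel.
        scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        return scores.reshape(r, c)

    # Toy usage on a synthetic 64 x 64 cube with 50 bands.
    cube = np.random.rand(64, 64, 50)
    print(rx_on_derivative(cube).shape)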

The sixth paper entitled “Robust feature matching via Gaussian field criterion for remote sensing image registration” by Ma et al. [6] focuses on the issue of feature matching. The authors propose a robust feature matching method and apply it to remote sensing image registration, in which a robust estimator with a Gaussian field criterion is developed to perform effective mismatch removal. The underlying image transformation is modeled by a homography and a non-rigid function, from the linear and nonlinear perspectives, respectively. A sparse approximation of the non-rigid transformation is used to significantly reduce the computational complexity.

The seventh paper entitled “A suite of parallel algorithms for efficient band selection from hyperspectral images” by Fontanella et al. [7] presents a suite of parallel algorithms for efficient band selection from hyperspectral images. Specifically, starting from an optimized serial C version of each algorithm, OpenMP and CUDA versions have been derived for multi-core CPUs and many-core graphics processing units (GPUs), respectively.

The eighth paper entitled “Fast hyperspectral band selection based on spatial feature extraction” by Cao et al. [8] presents a first attempt to use spatial feature extraction for reducing the dimensionality of band images and improving the band selection performance. The proposed method can dramatically reduce the dimensionality of each band image, which facilitates hyperspectral image band selection in real time.

The ninth paper entitled “Embedded GPU implementation of sensor correction for on-board real-time stream computing of high-resolution optical satellite imagery” by Wang et al. [9] addresses flexible and expandable on-board real-time processing with low power consumption for high-resolution optical satellites. Taking sensor correction as an example, the authors propose a feasible stream computing approach using a double-module data-parallel pipeline system based on an NVIDIA embedded graphics processing unit (GPU) platform to meet the on-board real-time sensor correction requirement.

The tenth paper entitled “Robust kernelized correlation filter with scale adaption for real-time single object tracking” by Li et al. [10] develops a new scale-adaptive kernelized correlation filter (KCF) for real-time single object tracking. To this end, a coarse-to-fine tuning scheme with Gaussian constraints based on the KCF is proposed to precisely locate the target, and the optimal scale of the target is then adaptively obtained by learning a one-dimensional correlation filter over a scale feature pyramid.
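
As background for readers unfamiliar with correlation-filter tracking, the sketch below illustrates the closed-form frequency-domain ridge regression that such trackers, including the KCF, build on. It shows only the single-channel linear (MOSSE-style) special case, not the authors' kernelized, scale-adaptive tracker, and the patch size, Gaussian width and regularization value are assumptions.

    import numpy as np

    def train_filter(patch, response, lam=1e-2):
        """Train a single-channel linear correlation filter in the Fourier domain
        (closed-form ridge solution; the linear special case of kernelized filters)."""
        F = np.fft.fft2(patch)
        G = np.fft.fft2(response)
        H_conj = (G * np.conj(F)) / (F * np.conj(F) + lam)
        return H_conj

    def detect(H_conj, patch):
        """Correlate a new patch with the filter; the response peak locates the target."""
        Z = np.fft.fft2(patch)
        resp = np.real(np.fft.ifft2(Z * H_conj))
        return np.unravel_index(np.argmax(resp), resp.shape)

    # Toy usage: the desired response is a Gaussian peak at the patch centre.
    size = 64
    yy, xx = np.mgrid[0:size, 0:size]
    gauss = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / (2 * 2.0 ** 2))
    patch = np.random.rand(size, size)
    H = train_filter(patch, gauss)
    print(detect(H, patch))   # expected: a location near the patch centre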

The eleventh paper entitled “A real-time unsupervised background extraction-based target detection method for hyperspectral imagery” by Li et al. [11] presents a real-time unsupervised background extraction-based target detection method that uses endmember extraction to obtain material signatures from the images. To suppress interferences, the target-constrained interference-minimized filter (TCIMF) is used. The proposed algorithm outperforms the adaptive coherence/cosine estimator (ACE) while running on a field-programmable gate array (FPGA).
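
For reference, a commonly cited closed form of the TCIMF weight vector is w = R^{-1}[D U]([D U]^T R^{-1}[D U])^{-1} c, where R is the sample correlation matrix and c contains ones for the desired signatures D and zeros for the undesired signatures U, so the filter passes the targets while nulling the interferers. The sketch below illustrates this standard form in NumPy; it is an assumption about the filter the paper refers to, not the authors' real-time implementation.

    import numpy as np

    def tcimf_weights(X, D, U, eps=1e-6):
        """Standard closed form of the target-constrained interference-minimized filter.

        X : (N, b) background pixels used to estimate the correlation matrix R
        D : (b, p) desired target signatures (constrained to respond 1)
        U : (b, q) undesired/interference signatures (constrained to respond 0)
        """
        b = X.shape[1]
        R = (X.T @ X) / X.shape[0] + eps * np.eye(b)   # regularized sample correlation matrix
        DU = np.hstack([D, U])                         # (b, p+q)
        c = np.concatenate([np.ones(D.shape[1]), np.zeros(U.shape[1])])
        Rinv_DU = np.linalg.solve(R, DU)
        w = Rinv_DU @ np.linalg.solve(DU.T @ Rinv_DU, c)
        return w                                       # detector output per pixel: x @ w

    # Toy usage: 1 desired and 2 undesired signatures over 30 bands.
    X = np.random.rand(500, 30)
    D = np.random.rand(30, 1)
    U = np.random.rand(30, 2)
    w = tcimf_weights(X, D, U)
    print((X @ w).shape)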

The twelfth paper entitled “Real-time image recognition using weighted spatial pyramid networks” by Zhu et al. [12] proposes an adaptive multipoint moment estimation (AMME) method and a feature extraction method named weighted pooling. Combining the two, the authors build an image recognition model, weighted spatial pyramid networks (WspNet), to improve the performance of real-time image recognition. Moreover, the algorithm uses finite lower-order moments to improve computing speed.

The thirteenth paper entitled “Polarimetric synthetic aperture radar image segmentation by convolutional neural network using graphical processing units” by Wang et al. [13] introduces an 11-layer deep convolutional neural network for polarimetric synthetic aperture radar image segmentation. Through a parallel GPU implementation, the proposed method achieves a 173-fold acceleration on the training samples and a 181-fold acceleration on the test samples, compared with a standard CPU.

The fourteenth paper entitled “Study of infrared reflection characteristics of aerial target using MODIS data on GPU” by Guo et al. [14] presents a sea surface emissivity model to characterize the thermal radiation of the earth’s surface and the atmospheric radiance. By calculating the reflection of background radiation incident from different directions in parallel, speedups of 9 times and 258 times are obtained using open multi-processing (OpenMP) on a multi-core CPU and on a many-core graphics processing unit (GPU), respectively.

The fifteenth paper entitled “A hardware-efficient parallel architecture for real-time blob analysis based on run-length code” by Li et al. [15] develops a novel parallel blob-analysis algorithm that processes objects of different types and sizes based on image data partitioning and multiple processing units. Moreover, a dynamic convex hull calculation method is designed, which is beneficial for the parallel processing and sub-block merging of connected component labeling. Finally, a parallel hardware architecture for the proposed algorithm is designed and implemented on an FPGA. The experimental results demonstrate that the proposed hardware architecture works more efficiently than state-of-the-art methods.

The sixteenth paper entitled “FPGA implementation of collaborative representation algorithm for real-time hyperspectral target detection” by Wu et al. [16] presents a novel FPGA-based technique for efficient real-time target detection in hyperspectral images, employing the collaborative representation-based target detection (CRD) algorithm. To achieve high processing speed on the FPGA platform, the dimensionality of the hyperspectral image is reduced first, and the Sherman–Morrison formula is used to compute the matrix inversion, reducing the complexity of the CRD algorithm. The experimental results reveal that the proposed system achieves a shorter processing time for the CRD algorithm than a 3.40 GHz CPU.
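
The Sherman–Morrison formula mentioned here states that (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u), so the inverse needed by the collaborative representation can be refreshed after a rank-one change instead of being recomputed from scratch. The following is a small generic numerical check of the identity, not the authors' FPGA implementation.

    import numpy as np

    def sherman_morrison_update(A_inv, u, v):
        """Rank-one update of a matrix inverse:
        (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)."""
        Au = A_inv @ u
        vA = v @ A_inv
        return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

    # Numerical check against direct inversion.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 8)) + 8 * np.eye(8)   # well-conditioned test matrix
    u, v = rng.standard_normal(8), rng.standard_normal(8)
    fast = sherman_morrison_update(np.linalg.inv(A), u, v)
    direct = np.linalg.inv(A + np.outer(u, v))
    print(np.allclose(fast, direct))   # expected: True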

The seventeenth paper entitled “Parallel supervised land-cover classification system for hyperspectral and multispectral images” by Salgado et al. [17] introduces a novel classification system for both hyperspectral and multispectral images. It consists of a novel parallel feature extraction algorithm, which uses a cluster of two GPUs in combination with a multi-core CPU, and an improved artificial neural network (ANN) for classification. The proposed classification system significantly reduces the computation time compared with non-parallel and CPU-only parallel implementations for multispectral and hyperspectral classification. Moreover, the proposed ANN outperforms the support vector machine (SVM) in terms of classification accuracy and inference time.

In summary, the contributions appearing in this special issue provide an excellent overview of a highly important topic. Combining the optimization of different image processing algorithms with the computational aspects of the underlying methods will further advance real-time image data processing on satellites.