Although we are still in the first quarter of 2016, this is the third issue of volume 11 of the Journal of Real-Time Image Processing. As explained in previous editorials, there are going to be two volumes, or a total of eight issues, appearing in 2016. The extra issues are arranged to address the backlog of papers that have already appeared online and are waiting to appear in print. We would like to bring to your attention some incentives that we are providing to authors for submitting quality manuscripts to the Journal of Real-Time Image Processing. One incentive is the Reviewer Reward Programme, whose details we mentioned in a previous editorial and which are described at this link: http://www.springer.com/computer/image+processing/journal/11554/PSE?detailsPage=press.

Another incentive is provided to those authors who have made remarkable contributions to the journal and to the real-time image processing community, as measured by their articles appearing in the “top ten” download list. Table 1 provides the list of such articles as of November 2015. Besides being cited as the most downloaded articles, such articles are now given a higher priority in the printing order than the first-in, first-out ordering scheme that the journal has exercised over the last 10 years. Under this scheme, accepted articles have appeared in a print issue, either as part of a regular issue or as part of a special issue, in the order they appeared as Online First articles. Special issues have been offered in between regular issues as collections of focused articles on a contemporary or hot subject of interest. We begin providing this incentive by including one such paper in this issue, namely the first paper (printed in bold in Table 1). This paper has been selected from among the qualifying papers in both regular and special issues (noted in italics in Table 1) on the basis of their download counts.

Table 1 Top-ten list of most downloaded online first published JRTIP articles

Due to the steady and significant growth in the reputation of JRTIP, so far this year we are witnessing a more than 25 % increase in the number of manuscript submissions compared to the same period in the last 3 years. This is partially attributed to the increase in the impact factor of JRTIP and partially due to the several special issue calls for papers that are included in the back matter of this issue.

It is worth mentioning that the increase in the number of submissions has also led to an increase in the number of submitted manuscripts that fall outside the real-time scope of the journal. JRTIP is dedicated to the real-time aspects of image processing, such as computational complexity reduction compared to existing solutions, real-time hardware implementation on various processors or platforms, actual real-time processing rates, and real-time software optimization of image processing algorithms. Therefore, before submitting their manuscripts, authors are strongly advised to visit JRTIP’s homepage at http://www.springer.com/11554 to make sure that their manuscripts are a good fit for the real-time focus of the journal.

One thing that we have also noticed is an increase in non-professional behavior in manuscript submissions, such as repeated submissions of previously rejected manuscripts without any changes, duplicate submissions to other journals and conferences, or an excessive degree of self-plagiarism from previously published articles. We take such situations very seriously and have flagged the authors of such manuscripts on a black list in Editorial Manager; future submissions from these authors will not be accepted. As a result of this increase in unprofessional behavior, we urge reviewers to visit the link http://www.springer.com/gp/authors-editors/author-academy and to use the online course materials to better familiarize themselves with how to detect such problems. In particular, we recommend that authors with little or no manuscript writing experience go through the online course material Peer Review for Authors at the link http://academy.springer.com/publish-journal-manuscript#.VoqlYFLiiZY, and that reviewers go through the online course material Peer Review for Reviewers at the link http://academy.springer.com/peer-review-academy#.Voqle1LiiZY. These course materials allow one to become more familiar with the ethical and scientific rules and standards for conducting a fair and independent reviewing process.

At this point, we wish to acknowledge the efforts of and thank Associate Editors Maria A. Amer, Madhukar Budagavi and Paolo Nesi, who have served on the editorial board since the inception of the journal, as well as Associate Editor Vinay Sharma, all of whom are leaving the editorial board due to other obligations and commitments.

This third issue of volume 11 comprises a total of 13 papers, all of which are original research articles. Five themes are noted in these papers: (1) fast noise removal and noise diffusion (2 papers), (2) real-time multi-camera image and disparity map processing (2 papers), (3) real-time motion estimation and tracking (3 papers), (4) real-time processing applications (3 papers) and (5) real-time implementation (3 papers). Brief summaries of these papers, ordered by these themes, are provided below.

The first paper by Malinski et al. on “Fast averaging peer group filter for the impulsive noise removal in color images” received much attention in terms of the number of downloads. It offers a new approach to impulsive noise removal in color images. A peer group concept is used to determine the filtering design via the membership of the central pixel of a filtering window in its local neighborhood, together with the distance between pixel pairs in a color space according to some thresholds. Pixels are considered uncorrupted if they have at least two close pixels in their peer groups; otherwise, a weighted average of uncorrupted pixels from the local neighborhood is used. The size of a pixel’s peer group determines its weight for averaging, e.g. pixels with large cliques of peers receive higher weights. The low computational complexity of the filter allows color images corrupted by strong impulsive noise to be restored under real-time constraints while preserving fine image details.
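
As a rough illustration of the peer-group idea only (not the authors' implementation), the following NumPy sketch marks a pixel as uncorrupted when at least two of its neighbors lie within a color-distance threshold and otherwise replaces it with the average of its neighbors; the threshold, window size and simple unweighted averaging are placeholder assumptions.

```python
import numpy as np

def peer_group_filter(img, dist_thresh=60.0, min_peers=2):
    """Toy peer-group impulse noise filter (illustrative sketch only).

    img: H x W x 3 float array (RGB). A pixel is kept when at least
    `min_peers` of its 8 neighbors lie closer than `dist_thresh` in
    Euclidean color distance; otherwise it is replaced by the average
    of its neighbors.
    """
    h, w, _ = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y, x]
            window = img[y - 1:y + 2, x - 1:x + 2].reshape(-1, 3)
            dists = np.linalg.norm(window - center, axis=1)
            peers = np.sum(dists < dist_thresh) - 1  # exclude the center itself
            if peers < min_peers:                    # likely impulse: replace
                neighbors = np.delete(window, 4, axis=0)  # drop the center pixel
                out[y, x] = neighbors.mean(axis=0)
    return out
```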

The second paper by Rifkah et al., “Non-linear diffusion (ND) of image noise with minimal iterativity,” addresses the iterative difference equation used in image processing for denoising, segmentation, or compression, whose number of iterations can be too high under real-time constraints. The paper seeks options to reduce the complexity of ND, targets a minimal number of iterations for real-time image denoising, and investigates the relationships between the parameters of the iterative equation, namely the number of iterations, the time step and the edge strength, to find an estimate of the minimal number of iterations needed to achieve effective denoising. The relationships among edge strength, number of iterations, noise and image structure are evaluated. The resulting minimal number of ND iterations is low, while still achieving similar or better noise reduction compared to existing ND works. The corresponding spatial filter is suitable for structure-sensitive object segmentation and temporal noise reduction.
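
For readers unfamiliar with ND, the classical Perona–Malik iteration below (a generic sketch, not the authors' formulation) illustrates the roles of the time step, the edge-strength parameter and the number of iterations whose minimization is the subject of the paper.

```python
import numpy as np

def perona_malik(img, n_iters=5, dt=0.2, kappa=15.0):
    """Generic Perona-Malik non-linear diffusion (illustrative sketch).

    img: 2-D float array; `dt` is the time step, `kappa` the edge-strength
    parameter and `n_iters` the number of diffusion iterations.
    """
    u = img.astype(np.float64).copy()
    for _ in range(n_iters):
        # Differences towards the four principal neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping function g(d) = exp(-(d / kappa)^2)
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```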

The third paper by Guler et al., “Real-time multi-camera video analytics system on GPU,” discusses a parallel implementation on a graphics processing unit (GPU) for intelligent video surveillance. Based on background subtraction, the system is composed of several functionalities, including motion detection, camera sabotage detection, abandoned object detection and object-tracking algorithms, each with a GPU implementation performing at a different speed-up rate. The tests conducted have confirmed that, even when all the algorithms run concurrently, the parallelization on the GPU makes the system up to nearly 22 times faster than a conventional CPU implementation, thus enabling real-time analysis using multiple camera set-ups.

The fourth paper by Santos et al. on “Scalable hardware architecture for disparity map computation and object location in real-time” presents the disparity map computation core of a hardware system for isolating foreground objects in stereoscopic video streams. The operation is based on the computation of dense disparity maps using block-matching algorithms with the sum of absolute differences (SAD) and the census transform (CT) as metrics. Two disparity maps are computed, each taking one of the images as reference, so that a consistency check can be performed to identify occluded pixels and eliminate spurious foreground pixels. The proposed parallel architecture is scalable and allows adaptation to different application needs, performance levels and resource usage. One implementation of the system on a Xilinx Virtex II-Pro FPGA with two cameras providing VGA images has led to a processing rate of 25 fps for a maximum disparity of 135 pixels. Implementation of the same system on a Virtex-5 FPGA is estimated to achieve 80 fps, while a version with increased parallelism is estimated to run at 140 fps.
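
A minimal software analogue of SAD-based block matching with a left–right consistency check (far removed from the paper's parallel hardware architecture, and with placeholder block size and disparity range) might look as follows.

```python
import numpy as np

def sad_disparity(left, right, max_disp=64, block=7):
    """Naive SAD block matching with the left image as reference (sketch only)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def consistency_check(disp_left, disp_right, tol=1):
    """Flag pixels whose left/right disparities disagree (likely occlusions).

    disp_right is assumed to be computed symmetrically with the right
    image as reference.
    """
    h, w = disp_left.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            if x - d >= 0 and abs(disp_right[y, x - d] - d) <= tol:
                valid[y, x] = True
    return valid
```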

The fifth paper by Po et al. on “An adaptive motion compensation method using superimposed inter-frame signals” proposes a multi-hypothesis motion compensation prediction (MHMCP) scheme that enhances the quality of motion-compensated prediction. Using an estimated distortion ratio, a weighting pair is adaptively determined for the linear combination of two signal blocks to form a prediction block with lower distortion. Unlike classical MHMCP, this method does not need additional side information to be transmitted, and it has better prediction accuracy than conventional motion-compensated prediction. In conjunction with its low algorithmic decision overhead, it can be implemented in hardware to support the realization of high-quality video coding in real time.
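
The core idea of blending two motion-compensated hypotheses with adaptively chosen weights can be sketched as follows; the inverse-distortion weighting used here is a simplified placeholder, not the ratio-based rule of the paper.

```python
import numpy as np

def combine_hypotheses(pred_a, pred_b, dist_a, dist_b):
    """Blend two motion-compensated prediction blocks (illustrative sketch).

    pred_a, pred_b: candidate prediction blocks of equal shape.
    dist_a, dist_b: estimated distortions of the two hypotheses; the weight
    of a hypothesis decreases as its estimated distortion grows.
    """
    w_a = dist_b / (dist_a + dist_b)  # lower distortion -> higher weight
    w_b = 1.0 - w_a
    return w_a * pred_a + w_b * pred_b
```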

The sixth paper by Yin et al. on “Buffer structure optimized VLSI architecture for efficient hierarchical integer pixel motion estimation implementation” addresses integer-pixel motion estimation (IME), a crucial and computationally demanding module in high-definition video encoders. An efficient joint design of algorithm and architecture is suggested to achieve a trade-off among multiple target parameters, including throughput capacity, logic gate count, on-chip SRAM size, memory bandwidth, and rate-distortion performance. The approach combines a global hierarchical search and a local full search to reach a hardware-efficient IME algorithm together with a VLSI architecture with an optimized on-chip buffer structure. The major contributions of this paper are: (1) an improved hierarchical IME algorithm with pre-search and deliberate data organization, (2) a multistage on-chip reference pixel buffer structure with high data reuse between integer and fractional pixel motion estimation, and (3) a highly reused and reconfigurable processing element structure. Compared with a full search-based architecture, the design achieves approximately 70 % buffer saving with less than 0.08 dB PSNR degradation on average, at throughputs of 384 and 272 cycles per macroblock and system frequencies of 95 and 264 MHz for 1080p and QFHD video coding at 30 fps, respectively.

The seventh paper by Varfolomieiev et al. deals with “An improved algorithm of median flow for visual object tracking and its implementation on ARM platform,” wherein the improvement comprises an adaptive selection of the aperture window size and the number of pyramid levels for optical flow estimation, yielding improved tracking of small and low-contrast objects compared to some existing algorithms. The implementation, based on the OpenCV library, was tested on the OMAP 35x EVM and BeagleBoard-xM, which use Texas Instruments’ OMAP3530 and DM3730 processors, respectively. The results indicate the versatility and computational robustness of the algorithm for embedded applications involving ARM processors.
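
The pyramidal Lucas–Kanade optical flow underlying median-flow tracking is available in the OpenCV library used by the authors; the snippet below is a generic usage example (not the authors' adaptive implementation) showing the aperture window size and pyramid level parameters that the paper selects adaptively.

```python
import cv2
import numpy as np

def track_points(prev_gray, next_gray, points, win=21, levels=3):
    """Track feature points between two grayscale frames with pyramidal
    Lucas-Kanade optical flow; `win` and `levels` correspond to the aperture
    window size and number of pyramid levels discussed in the paper."""
    pts = points.astype(np.float32).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(win, win), maxLevel=levels)
    good = status.ravel() == 1
    return next_pts.reshape(-1, 2)[good], pts.reshape(-1, 2)[good]
```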

The eighth paper by Todorovich et al. on “Real-time speckle image processing” deals with speckle, an optical phenomenon produced when laser light is reflected from an illuminated surface that itself shows some kind of activity, allowing non-destructive inspection of conditions such as seed viability, paint drying, bacterial activity, corrosion processes, food decomposition and fruit bruising. Real-time analysis of such processes enables commercial, biological and technological applications of interest. A digital system implemented on a field programmable gate array (FPGA) and based on granular computing is presented that characterizes speckle dynamics in the time domain. The clock periods and latencies achieved enable speckle images of size 512 × 512 to be processed under real-time constraints with a maximum throughput of about one thousand frames per second.

The ninth paper by Torres-Huitzil on “FPGA-based fast computation of gray-level morphological granulometries” addresses a useful and versatile image analysis technique applied to a wide range of tasks, ranging from the size distribution of objects to feature extraction and texture characterization in industrial and research applications. Granulometries based on sequences of openings with structuring elements (SEs) of increasing size are computationally demanding on general purpose hardware. A pipelined hardware architecture centred on two systolic-like processing arrays is devised for the fast computation of gray-level morphological granulometries with flat SEs of different shapes and sizes. Simulation of the architecture on an FPGA has validated the proposed scheme, which computes particle size distributions of 512 × 512 pixel images with flat non-rectangular SEs of up to 51 × 51 elements in approximately 60 ms at a clock frequency of 260 MHz. This corresponds to speed-up factors of two orders of magnitude compared to pure software implementations and competes favorably with similar architectures and optimized high-performance graphics processing unit implementations.
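
In software, a gray-level granulometry amounts to a series of openings with SEs of increasing size, as in the brief OpenCV/NumPy sketch below (a functional illustration, not the systolic hardware architecture described in the paper).

```python
import cv2
import numpy as np

def granulometry(img, max_radius=25, step=2):
    """Gray-level granulometry: remaining image volume after openings with
    elliptical SEs of increasing size (illustrative sketch)."""
    volumes = []
    for r in range(1, max_radius + 1, step):
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * r + 1, 2 * r + 1))
        opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, se)
        volumes.append(float(opened.sum()))
    # Differences of successive volumes give the pattern spectrum,
    # i.e. an estimate of the particle size distribution.
    return -np.diff(np.array(volumes))
```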

The tenth paper by Anders et al. on “A hardware/software prototyping system for driving assistance investigations” discusses a holistic design and verification environment for investigating driving assistance systems, with emphasis on system-on-chip architectures for video applications. Starting with the specification of a driving assistance application, successive transformations are performed across different levels of abstraction until the final implementation is achieved. The hardware/software partitioning is facilitated through an integration of OpenCV and SystemC in the same design environment, as well as of OpenCV and Linux in the run-time system, on a rapid-prototyping, FPGA-based camera system that allows designs to be explored and evaluated under realistic conditions. Using a lane departure application, the platform demonstrated reduced design time and improved verification effort.

The 11th paper by Momcilovic et al. on “Exploiting task and data parallelism for advanced video coding on hybrid CPU + GPU platforms” starts from the observation that commodity computers are equipped with both CPU and GPU devices whose parallelization capabilities can be exploited as a hybrid platform for high-performance video encoding. Accordingly, the H.264/advanced video coding (AVC) inter-loop is implemented concurrently on hybrid GPU + CPU platforms, comprising dynamic dependency-aware task distribution methods and real-time load balancing among the computational resources. A set of parallel video coding algorithms optimized for both the CPU and the GPU is dynamically instantiated on any of the existing processing units to minimize the overall encoding time. The proposed model provides efficient task scheduling and load balancing for the H.264/AVC inter-loop without increasing the computational burden of this time-limited video coding application. The experimental results show speed-ups by a factor of 2.5 compared to optimized GPU-only encoding implementations; on standard off-the-shelf computers, inter-loop encoding rates of up to 40 fps are achieved for HD 1920 × 1080 resolution.

The 12th paper by Fishbain et al. on “A competitive study of the pseudo-flow algorithm for the minimum s–t cut problem in vision applications” addresses the minimum s–t cut problem, a classical combinatorial optimization problem and a prominent building block in many vision and imaging algorithms such as video segmentation, stereo vision, multi-view reconstruction and surface fitting. The paper introduces Hochbaum’s pseudo-flow (HPF) algorithm, which optimally solves the minimum s–t cut problem, to computer vision. The performance of HPF is compared, in terms of execution time and memory utilization, with three leading algorithms: (1) Goldberg and Tarjan’s push-relabel (PRF); (2) Boykov and Kolmogorov’s augmenting paths (BK); and (3) Goldberg’s partial augment-relabel. While the common practice in computer vision is to use either the BK or the PRF algorithm, the results demonstrate that the HPF algorithm is more efficient and uses less memory than algorithms (1)–(3), suggesting HPF as an option for many real-time computer vision problems that require solving the minimum s–t cut problem.
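
As a reminder of the underlying problem being benchmarked, the small example below sets up a capacitated graph and solves a minimum s–t cut with NetworkX; the solvers compared in the paper (HPF, PRF, BK and partial augment-relabel) are separate implementations and are not invoked here.

```python
import networkx as nx

# Tiny capacitated graph with source "s" and sink "t".
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)

# Minimum s-t cut value and the induced node partition; in vision problems
# the two sides of the cut typically correspond to the two label assignments.
cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print(cut_value, sorted(source_side), sorted(sink_side))
```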

The 13th and last paper of this regular issue, by Mabrouk et al. on “A computationally efficient technique for real-time detection of particular-slope edges,” proposes the identification of oblique lines of a particular slope, as needed for various applications such as motion tracking for smart cameras. The angle of the edges is converted into pixel increments over rows and columns, which serve as the parameters defining parallel, oblique lines of a particular slope. A first-order Haar low-pass filter (LPF) is used to filter out undesired edges. The hardware architecture of the proposed technique is described, including timing issues, fixed-point implementation and a line-based memory requirement reduced to only two registers, demonstrating the computational advantages of the method over the Sobel, Canny and Hough transform (HT) detectors.