Recent advances in computation, storage, and acquisition technologies allow for building vision-based intelligent systems capable of analyzing the observed scene. Motion analysis, tracking, and reasoning are in high demand to meet the needs of such intelligent systems; prominent examples are traffic monitoring and control, as well as safety and security applications. Real-time vision-based intelligent systems addressing the challenges of motion analysis still suffer from a trade-off between accuracy and computational complexity. To cope with real-time image processing constraints, some systems incorporate prediction or similar algorithms to alleviate processing-time requirements, while others choose to analyze only selected frames of the processed video. Other approaches try to increase performance by decimating the motion field or by selecting, automatically or manually, a region of interest within a frame in which the motion field is then computed and analyzed.

This special issue on Real-Time Vision-Based Motion Analysis and Intelligent Transportation Systems brings together theoretical and applied contributions that capture recent developments in real-time motion detection, analysis, and tracking algorithms, as well as in applied systems.

For many years, while tackling highly correlated sets of problems, computer vision on the one hand and video and image processing on the other were regarded as two parallel lines of research. Loosely speaking, the computer vision community aims at understanding the observed scene in terms of feature extraction, optical flow, and stereo, while the video and image processing literature focuses on how to better represent the image, both in terms of compactness (compression) and quality (usually through linear or non-linear filtering). Advances in video and computation technologies have narrowed this gap, and the trend is well represented in this special issue. The papers included here cover the entire span of both disciplines, from algorithms for acquiring machine perception of the observed scene in real time, through motion analysis within the scope of video codecs, up to hardware architecture and design for the task of motion analysis. This cohort of works positions this special issue as a unique source for real-time vision-based motion analysis and its utilization in intelligent systems.

The first paper in this special issue describes an algorithm for selecting an optimal subset of cameras, out of a set of CCTV cameras, such that maximum activity is observed. The underlying concept is the use of sensor semantics to guide the selection of the camera subset. During the algorithm's initialization phase, a probabilistic model of motion correlations between cameras is learned from observation. This correlation model is then used to optimize the selection of cameras for the next time interval, based on the observations in the current time interval.
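To illustrate the idea, the following is a minimal sketch rather than the authors' exact formulation: a learned, row-normalized correlation matrix predicts next-interval activity from the current interval, and a greedy pass keeps the k highest-scoring cameras. All names, parameters, and the greedy selection rule are assumptions made for illustration.

```python
import numpy as np

def select_cameras(correlation, current_activity, k):
    """Pick k cameras expected to observe the most activity in the next interval.

    correlation[i, j] -- learned probability that activity at camera i in one
                         interval is followed by activity at camera j in the
                         next interval (estimated during initialization).
    current_activity  -- per-camera activity measured in the current interval.
    """
    # Predicted activity at each camera for the next interval.
    predicted = correlation.T @ current_activity
    # Greedy selection of the k most promising cameras.
    return np.argsort(predicted)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_cams = 8
    corr = rng.random((n_cams, n_cams))
    corr /= corr.sum(axis=1, keepdims=True)   # row-normalize to probabilities
    activity_now = rng.random(n_cams)
    print(select_cameras(corr, activity_now, k=3))
```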

The second paper addresses scene reasoning from a different perspective: it describes the algorithms and implementation of a library for real-time body posture recognition. Body posture can reveal a pedestrian's intent and awareness of an oncoming car. In this case, the analysis is based on individual frames; a syntactic post-processing module then takes temporal information into account, smoothing the results over time and correcting improbable configurations.
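The paper's syntactic post-processing is more elaborate than this, but the following sketch conveys the underlying idea of temporal smoothing: per-frame posture labels are assumed as input, and a simple sliding-window majority vote corrects isolated, improbable detections. The labels and window size are illustrative only.

```python
from collections import Counter

def smooth_postures(frame_labels, window=5):
    """Replace each per-frame label by the majority label in a sliding window."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_labels)):
        lo, hi = max(0, i - half), min(len(frame_labels), i + half + 1)
        majority, _ = Counter(frame_labels[lo:hi]).most_common(1)[0]
        smoothed.append(majority)
    return smoothed

# Example: an isolated, improbable "lying" detection between "walking" frames
# is corrected by its temporal context.
print(smooth_postures(["walking", "walking", "lying", "walking", "walking"]))
```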

The third paper in this issue addresses a more general question by suggesting an algorithm for decision making based on a set of classifiers. The proposed framework is a generalized version of the online boosting algorithm that dynamically builds a cascade of classifiers to speed up online boosting while maintaining real-time performance. The latter is achieved by automatically adjusting the trade-off between speed and accuracy. The suggested method has a broad range of possible applications; its applicability to object detection and tracking in video surveillance is presented.
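The speed-up mechanism of a cascade can be sketched as follows; this is a hypothetical interface, not the paper's implementation. Cheap stages are evaluated first, and a candidate window is rejected as soon as the accumulated boosted score falls below a stage threshold, so most windows never reach the expensive stages. The stage structure and thresholds shown are assumptions.

```python
def cascade_classify(window, stages):
    """stages: list of (weak_classifiers, threshold); each weak classifier is a
    (predict, alpha) pair, as produced by an online boosting learner."""
    score = 0.0
    for weak_classifiers, threshold in stages:
        for predict, alpha in weak_classifiers:
            score += alpha * predict(window)   # boosted vote of this stage
        if score < threshold:                  # early rejection -> speed-up
            return False
    return True                                # survived all stages: detection

if __name__ == "__main__":
    # Toy stages: threshold tests on a scalar "feature" standing in for a window.
    stages = [
        ([(lambda w: 1.0 if w > 0.2 else -1.0, 0.5)], 0.0),   # cheap stage
        ([(lambda w: 1.0 if w > 0.6 else -1.0, 1.0)], 0.5),   # stricter stage
    ]
    print(cascade_classify(0.1, stages), cascade_classify(0.9, stages))
```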

The fourth paper exploits motion analysis and its characteristics to devise a more efficient video compression mechanism. The suggested algorithm speeds up compression by exploiting natural motion behavior, performing low-resolution, and thus coarse, motion estimation in the DCT domain. The proposed video encoder architecture is based on the conventional hybrid coder and on a set of fast integer composition and decomposition DCT transforms.
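A crude stand-in for this resolution reduction is worth sketching: the DC coefficient of an 8x8 DCT block is proportional to the block mean, so block matching on per-block means approximates matching in a heavily decimated DCT domain. The paper's encoder relies on fast integer DCT composition and decomposition transforms; the sketch below only illustrates the coarse matching step, with illustrative names and parameters.

```python
import numpy as np

def dc_image(frame, block=8):
    """Reduce a frame to one value per block (proportional to the DCT DC term)."""
    h, w = frame.shape
    return frame[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def coarse_motion(prev_dc, curr_dc, search=2):
    """Full-search block matching on the DC images; returns (dy, dx) in blocks."""
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(prev_dc, (dy, dx), axis=(0, 1))
            sad = np.abs(curr_dc - shifted).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prev = rng.random((64, 64))
    curr = np.roll(prev, (8, -8), axis=(0, 1))             # shift by one 8x8 block
    print(coarse_motion(dc_image(prev), dc_image(curr)))   # expect (1, -1)
```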

The fifth paper addresses motion analysis on both the algorithmic and the hardware design levels. The paper suggests applying motion estimation techniques not to the original video but to the estimated illumination of the scene. To comply with real-time constraints, illumination extraction is mainly based on rational filters realized in a recursive way, while the motion estimation itself uses block-matching techniques.
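As a rough illustration of the pipeline, the sketch below replaces the paper's recursive rational filters with a simple separable recursive (IIR) low-pass as the illumination estimate and applies plain SAD block matching to that estimate. The filter, its coefficient, and the block-matching parameters are assumptions for illustration only.

```python
import numpy as np

def illumination(frame, alpha=0.9):
    """Crude illumination estimate: a causal recursive low-pass along rows, then columns."""
    est = frame.astype(float)
    for axis in (1, 0):
        est = np.moveaxis(est, axis, 0)
        for i in range(1, est.shape[0]):
            est[i] = alpha * est[i - 1] + (1 - alpha) * est[i]
        est = np.moveaxis(est, 0, axis)
    return est

def match_block(prev_illum, curr_illum, y, x, size=16, search=4):
    """SAD block matching of one block of the current illumination against the previous one."""
    block = curr_illum[y:y + size, x:x + size]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > prev_illum.shape[0] or xx + size > prev_illum.shape[1]:
                continue
            sad = np.abs(block - prev_illum[yy:yy + size, xx:xx + size]).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best
```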

In the sixth paper, a two-level real-time vision system is described that combines coarse-grain and fine-grain image information to extract features for real-time machine vision applications. The authors use computer architectural abstractions to represent data on both the GPU and multiple CPUs so as to perform real-time image analysis efficiently on a vision platform. This method is particularly useful for processing vision information with a parallel architecture on a multi-core platform at high frame rates.
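The two-level split can be pictured with the following sketch, which is illustrative only and not the authors' platform: a cheap coarse-grain pass on a subsampled frame selects candidate regions, and fine-grain feature extraction on those regions is farmed out to a pool of worker threads, standing in for the paper's division of labour between the GPU and multiple CPUs. All functions and thresholds are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def coarse_pass(frame, factor=8, threshold=0.7):
    """Coarse grain: pick blocks whose subsampled pixel exceeds a threshold."""
    small = frame[::factor, ::factor]
    ys, xs = np.nonzero(small > threshold)
    return [(y * factor, x * factor, factor) for y, x in zip(ys, xs)]

def fine_pass(frame, region):
    """Fine grain: compute a feature on one region (here simply local variance)."""
    y, x, s = region
    patch = frame[y:y + s, x:x + s]
    return (y, x, float(patch.var()))

if __name__ == "__main__":
    frame = np.random.default_rng(2).random((256, 256))
    regions = coarse_pass(frame)
    with ThreadPoolExecutor(max_workers=4) as pool:
        features = list(pool.map(lambda r: fine_pass(frame, r), regions))
    print(len(features), "regions analysed at fine grain")
```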

To conclude, we would like to thank the authors, who contributed to this special issue by reporting on their cutting-edge research, findings, and achievements. A special thank you also goes to the individuals who made this special issue possible. The reviewers deserve particular thanks for their insightful and most valuable comments; their feedback improved the final papers and was a major contributor to this fine collection of works. We are grateful for all the help and support we received from the editorial office staff. Finally, the editors of this issue would like to thank the Journal of Real-Time Image Processing Editors-in-Chief, Nasser Kehtarnavaz and Matthias F. Carlsohn, who invited us to serve as editors of this special issue. We are confident that the selected papers will contribute to the ongoing research in real-time image and video processing in academia, industry, and other interested agencies and organizations.