1 Introduction

In ITS applications, camera-based detection methods can be used for navigation, safe driving, surveillance, and supplementing the results of other sensors. In traditional ITS applications, vehicles are the main targets. Pedestrians are now also considered important subjects of ITS applications, and bicycles are becoming very popular for environmental and economic reasons. In Japan, the number of traffic accidents involving bicycles and pedestrians is very large. We therefore tackle the problem of detecting freely moving bicycle riders and pedestrians from data collected by a camera that observes them from above. Such situations arise in parks, university campuses, station squares, tourist spots, etc. Here we focus on computer vision techniques for detection in surveillance scenarios. The method also assumes that target objects captured by the surveillance cameras do not change much in scale.

Most state-of-the-art visual detection methods fall into two main categories: sliding-window methods and Hough transform based methods. Methods based on a sliding-window scheme [10, 27] perform detection in a purely mechanical way: a decision on whether a target object exists is made for some or all of the sub-images of a test image. Besides their attractive performance and the extensibility of combining various kernels, these methods are favorable because they consider each object as a whole during detection. However, they bear little resemblance to human visual perception, and their efficiency depends heavily on the size of the test images.

The other methods [13, 5, 6, 18] detect objects based on the generalized Hough transform [1]. Object parts are detected, and each part contributes confidence to the locations that are potential object centers; object locations are then decided from the accumulated confidence. These methods are favorable for their robustness to partial deformation and their ease of training, and to human beings this kind of method seems more natural. In our work, we combine a mechanism of human visual perception with the ISM [13] to demonstrate this natural property.

A typical Hough transform based method contains two steps: training and detection. During training, a codebook of object parts is built from a set of well-annotated images. Each code in the codebook encodes the appearance of an object part, its offset to the object center, and the class label. The appearance of an object part is given in the form of keypoint descriptors [13], image patches [7, 19], or image regions [8]. During the detection step, object parts are detected on each test image. Every object part is matched against the codebook, and the several codes nearest in appearance are activated. The offset and class label encoded in each activated code act as a vote. All the votes from the object parts are summed to form a Hough image. The peaks of the Hough image are considered detection hypotheses, with the height of each peak as the confidence for the corresponding hypothesis.
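To make this pipeline concrete, the following Python sketch (not the implementation of any cited method) shows one possible codebook data structure and voting loop; the names Code, match_codebook, and accumulate_votes, the Euclidean descriptor distance, and the parameter n_best are illustrative assumptions.

from dataclasses import dataclass
import numpy as np

@dataclass
class Code:
    appearance: np.ndarray   # descriptor of the object part
    offset: np.ndarray       # 2-D offset from the part to the object center
    label: int               # class label of the training object

def match_codebook(descriptor, codebook, n_best):
    """Return the n_best codes whose appearance is closest to descriptor."""
    dists = [np.linalg.norm(descriptor - c.appearance) for c in codebook]
    order = np.argsort(dists)[:n_best]
    return [codebook[i] for i in order]

def accumulate_votes(parts, codebook, image_shape, n_classes, n_best=5):
    """Each detected part (keypoint position, descriptor) votes for object
    centers; the peaks of the returned Hough image are the hypotheses."""
    hough = np.zeros((n_classes,) + image_shape)
    for pos, descriptor in parts:            # pos is a 2-D numpy array
        for code in match_codebook(descriptor, codebook, n_best):
            cx, cy = np.round(pos + code.offset).astype(int)
            if 0 <= cx < image_shape[0] and 0 <= cy < image_shape[1]:
                hough[code.label, cx, cy] += 1.0
    return hough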

Two challenging issues for detection methods are how to separate nearby objects and how to separate similar objects of different classes. In ITS applications the target objects are pedestrians, bicycle riders, and automobiles. In the sliding-window scheme, non-maximum suppression is usually needed for post-processing; the mechanism in [10] works by excluding from the feature pool the features that belong to each successive detection response. Hough transform based methods employ a similar mechanism [2], but only after the Hough image has been formed. During the forming of a Hough image, two kinds of votes make detection challenging: (1) votes cast by object parts from nearby objects mix up the peaks corresponding to different objects, and (2) votes cast by similar parts of different classes make the class labels of the peaks hard to decide. See Fig. 2d. Before the Hough image is formed, problems also arise from contamination of the codebook by the background portions of the training images. A very clean codebook can be built by marking the foreground during training, but this requires manual effort; otherwise, a large number of training examples is needed to keep the codebook effective, which decreases efficiency.

In videos, motion information is also available through simple tracking of object parts. We therefore propose a detection method that utilizes both appearance and motion information. The method is based on the common fate principle [23], one of the visual perception principles theorized by Gestalt psychologists, which states that tokens moving coherently are perceptually grouped by human beings. This suggests grouping the object parts by their motion patterns and letting them vote afterwards. In our work, the object parts are represented by keypoint descriptors, which are tracked to generate trajectories. The object parts are grouped by the pairwise similarities of their corresponding trajectories. Under the assumption that object parts in the same motion group probably belong to the same object, for each object part we assign higher weights to the votes that are more “agreeable” within the motion group. As a result, votes corresponding to true detection responses are more likely to receive higher weights, and on a Hough image formed by summing these weighted votes the peaks are easier to find, as shown in Fig. 1d.

Fig. 1

Merit of the proposed method. a Original image. b Motion grouping results. Some parts are enlarged to show details. c Original Hough image. d Hough image formed using our method. The grids in c and d correspond to the grids in a

By combining motion analysis results with the Hough transform framework and assigning different weights to each object part's votes, the proposed method has several appealing properties:

  • It can estimate the positions and class labels of multiple objects from different classes. Three types of objects make this task challenging: nearby objects, similar objects of different classes, and same-class objects in multiple poses.

  • It can use a codebook trained on images with cluttered backgrounds.

  • The framework for combining grouping results of object parts is very general and can be easily extended.

The remainder of the paper is organized as follows. Section 2 reviews related work. Section 3 formalizes the common fate Hough transform. Section 4 describes inference on the formed Hough images. Section 5 gives experimental results, and Section 6 concludes.

2 Related Work

Our work is most closely related to object detection methods [2, 12, 13, 14, 15, 16] based on the Hough transform framework. Such methods have recently made considerable progress. The ISM [13, 14] is extended in [2] by making the correspondences between object parts and hypotheses explicit, enabling the detection of multiple nearby objects. In the methods of [7, 15, 19], the Hough transform is placed in a discriminative framework in which the codes are assigned different weights according to the co-occurrence frequency of their appearance and their offset to the object center. Two Hough transform methods consider the grouping of object parts [20, 26]. The method in [20] deals with scale change: instead of estimating the scale with local features trained from examples at different scales, the votes are treated as voting lines, and local features are first grouped by the differences between their voted centers, resulting in a more consistent vote for the object center. In [26], the grouping of object parts, the correspondence between object parts and objects, and the decisions on detection hypotheses are optimized within the same energy function. The problem with this method is that the grouping results have no intrinsic meaning and do not correspond to real entities.

Our work is also related to object detection methods which use trajectories [3, 4], methods using the weighting of features [25], methods dealing with codebook noise [17], and methods which integrate temporal information [24].

3 Common Fate Hough Transform

Probabilistic formulations are very appealing because they make inference easy. However, as pointed out in [11], placing the Implicit Shape Model (ISM) in a probabilistic framework is not satisfactory; in particular, describing the weights of the votes as priors does not make sense. A Hough transform can simply be considered a transformation from a set of object parts, {e}, to a confidence space of object hypotheses, C(x, l), where x is the coordinate of the object center and l is the class label. The terms described as priors of the votes in the ISM are actually weights, and the likelihood terms are actually blurring functions that convert discrete votes into a continuous space. This section describes how a Hough image for the estimation of object centers and labels is formed from the object parts observed on an image.

Let e denote an object part observed on the current image. The appearance of e is matched against the codebook, and e activates the N best matched codes. Each code contains an appearance, an offset to the object center, and a class label. According to the N matched codes, e casts N votes. Each vote \(V_{\mathbf{e}}\) concerns the center of the object that generated e. The object-center position cast by a vote V is denoted by \({\mathbf{x}}_{V}\), and its class label by \(l_{V}\). Based on the N votes of e, the confidence that a position \(\tilde {\mathbf {x}}\) is the center of an object with class label \(\tilde {l}\) is given by,

$$ C({\tilde{\mathbf{x}}},\tilde{l};{\mathbf{e}}) = \sum\limits^{N}_{i=1} {B\left({\tilde{\mathbf{x}}},{\tilde{l}};{V_{\mathbf{e}}^{i}}\right) w\left({V_{\mathbf{e}}^{i}}\right)}\:. $$
(1)

Here \(B\left ({\tilde {\mathbf {x}}},{\tilde {l}};{V_{\mathbf {e}}^{i}}\right )\) is the blurring function. And \(w\left ({V_{\mathbf {e}}^{i}}\right )\) is the weight of \({V_{\mathbf {e}}^{i}}\).

The idea of the proposed method is that the weight term, \(w\left ({V_{\mathbf {e}}^{i}}\right )\), is defined by the motion grouping results of all the object parts.

The blurring function is defined as,

$$ B(\tilde{\mathbf{x}},\tilde l;V) = \left\{ \begin{array}{*{20}{c}} 0 &\mbox{ if } {l_{V}} \ne \tilde{l} \mbox{~or~} |\tilde{\mathbf{x}} - {\mathbf{x}}_{V}| > d \\ G(\tilde{\mathbf{x}};{{\mathbf{x}}_{V}},\sigma) & \mbox{otherwise} \end{array} \right. . $$
(2)

Here \(G(\tilde {\mathbf {x}};{\mathbf {x}}_{V},\sigma )\) is a Gaussian function that accounts for the spatial gap between \(\tilde {\mathbf {x}}\) and \({\mathbf{x}}_{V}\).
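As a minimal illustration of Eq. 2, the following Python sketch assumes an isotropic, unnormalized Gaussian for G; the exact form of G and the default values of d and σ are assumptions of the sketch.

import numpy as np

def blur(x_tilde, l_tilde, x_v, l_v, d=10.0, sigma=3.0):
    """Blurring function B of Eq. 2: zero for a different label or for a voted
    center farther than d, otherwise a Gaussian of the distance to x_V."""
    x_tilde, x_v = np.asarray(x_tilde, float), np.asarray(x_v, float)
    gap = np.linalg.norm(x_tilde - x_v)
    if l_v != l_tilde or gap > d:
        return 0.0
    return np.exp(-0.5 * (gap / sigma) ** 2)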

Let M be the total number of object parts on the image, then by summing up over all the object parts, the confidence of \(\tilde {\mathbf {x}}\) being the center of an \(\tilde {l}\)-class object is given by,

$$\begin{array}{@{}rcl@{}} C({\tilde{\mathbf{x}}},\tilde{l}) &=&\sum\limits^{M}_{j=1} C({\tilde{\mathbf{x}}},\tilde{l};{\mathbf{e}}_{j})w({\mathbf{e}}_{j}) \notag \\ &=&\sum\limits^{M}_{j=1} \sum\limits^{N}_{i=1} {B\left({\tilde{\mathbf{x}}},{\tilde{l}};{V_{{\mathbf{e}}_{j}}^{i}}\right) w\left({V_{{\mathbf{e}}_{j}}^{i}}\right)w({\mathbf{e}}_{j})} \:. \end{array} $$
(3)

A uniform weight is assumed for each object part, i.e., \(w({\mathbf {e}}_{j})=\frac 1 M\). By considering \(C({\tilde {\mathbf {x}}},\tilde {l})\) as the evaluation score of the point \(({\tilde {\mathbf {x}}},\tilde {l})\) in the Hough space, the task of estimating object centers and labels becomes finding, and then validating, the local maxima of the Hough image.
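The following sketch evaluates Eq. 3 on a discrete image grid, assuming the blurring function of Eq. 2 with an unnormalized Gaussian; the data layout (a list of votes and per-vote weights for each part) and the parameter defaults are assumptions of the sketch.

import numpy as np

def form_hough_image(votes, weights, image_shape, n_classes, d=10.0, sigma=3.0):
    """Eq. 3 on a discrete grid. votes[j] holds the N votes (x_V, l_V) of part
    e_j, weights[j] their weights w(V_e_j^i); each part itself receives the
    uniform weight 1/M."""
    M = len(votes)
    hough = np.zeros((n_classes,) + image_shape)
    ys, xs = np.mgrid[0:image_shape[0], 0:image_shape[1]]
    for part_votes, part_weights in zip(votes, weights):
        for (x_v, l_v), w in zip(part_votes, part_weights):
            dist = np.sqrt((ys - x_v[0]) ** 2 + (xs - x_v[1]) ** 2)
            kernel = np.where(dist <= d, np.exp(-0.5 * (dist / sigma) ** 2), 0.0)
            hough[l_v] += (w / M) * kernel
    return hough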

3.1 Common Fate Weights

To meet the challenges of separating nearby objects, separating similar objects of different classes, and using a noisy codebook, different weights are assigned to the votes of each object part based on the motion grouping results of the object parts. This subsection describes how given grouping results are incorporated into the Hough transform framework.

Let γ = {g} denote the grouping results, where g is a group of object parts. Assume \({\mathbf{e}}_{m} \in {\mathbf{g}}\) and \({\mathbf{e}}_{n} \in {\mathbf{g}}\). Those votes of \({\mathbf{e}}_{m}\) which are more “agreeable” with the votes of the other object parts in g are assigned larger weights.

Towards this end, the relationship between the votes of \({\mathbf{e}}_{m}\) and the votes of \({\mathbf{e}}_{n}\) needs to be given in advance. This relationship is named support. The support from \({V_{{{\mathbf {e}}_{n}}}}\) to \({V_{{{\mathbf {e}}_{m}}}}\) is defined based on \({V_{{{\mathbf {e}}_{n}}}}\) and the confidence that \({V_{{{\mathbf {e}}_{m}}}}\)’s voted center is correct, as,

$$S({V_{{{\mathbf{e}}_{n}}}} \to {V_{{{\mathbf{e}}_{m}}}}) = B({\mathbf{x}}_{V_{{{\mathbf{e}}_{m}}}},l_{V_{{{\mathbf{e}}_{m}}}};{V_{{{\mathbf{e}}_{n}}}})\:,n \ne m\:. $$

Here \(B({\mathbf {x}}_{V_{{{\mathbf {e}}_{m}}}},l_{V_{{{\mathbf {e}}_{m}}}};{V_{{{\mathbf {e}}_{n}}}})\) is defined in Eq. 2. This measures the coherence of the two votes from different object parts.

Then, the support from \({\mathbf{e}}_{n}\) to \({V_{{{\mathbf {e}}_{m}}}}\) is defined based on \({\mathbf{e}}_{n}\) and the confidence that \({V_{{{\mathbf {e}}_{m}}}}\)’s voted center is correct, as,

$$\begin{array}{@{}rcl@{}} S({{{\mathbf{e}}_{n}} \to {V_{{{\mathbf{e}}_{m}}}}}) &=& C({\mathbf{x}}_{V_{{{\mathbf{e}}_{m}}}},l_{V_{{{\mathbf{e}}_{m}}}};{{{\mathbf{e}}_{n}}})\\ &=&\sum\limits^{N}_{i=1} {S\left({{V^{i}_{{{\mathbf{e}}_{n}}}} \to {V_{{{\mathbf{e}}_{m}}}}}\right)w\left(V^{i}_{{\mathbf{e}}_{n}}\right)}\:,n \ne m\:. \end{array} $$

The support from g to \({V_{{{\mathbf {e}}_{m}}}}\) is then defined as the confidence that \({V_{{{\mathbf {e}}_{m}}}}\)’s voted center is correct, based on the votes of all the object parts in g other than the part that cast \({V_{{{\mathbf {e}}_{m}}}}\), as,

$$\begin{array}{@{}rcl@{}} S({\mathbf{g}} \to V_{{\mathbf{e}}_{m}}) &=& \sum\limits_{{{\mathbf{e}}_{i}} \in {\mathbf{g}}- \{ {{\mathbf{e}}_{m}} \} }{C({\mathbf{x}}_{V_{{{\mathbf{e}}_{i}}}},l_{V_{{{\mathbf{e}}_{m}}}};{{{\mathbf{e}}_{i}}})w({{\mathbf{e}}_{i}})}\\ & =&\frac 1 M \sum\limits_{{{\mathbf{e}}_{i}} \in {\mathbf{g}}- \{ {{\mathbf{e}}_{m}} \}} {S({{{\mathbf{e}}_{i}} \to {V_{{{\mathbf{e}}_{m}}}}})} \:. \end{array} $$

Assuming that all object parts in the same motion group come from the same object, i.e., that motion grouping gives good results, the estimates of center position and class label given by every object part should be consistent with those given by the motion group. Thus for a particular vote of \({\mathbf{e}}_{m}\), i.e., \({\tilde {V}}_{{\mathbf {e}}_{m}}\), a weight is assigned by considering its consistency with g relative to the consistency of \({\mathbf{e}}_{m}\)’s other votes with g, as

$$\begin{array}{@{}rcl@{}} w\left({\tilde{V}}_{{\mathbf{e}}_{m}}\right) &=& \frac{S\left({\mathbf{g}} \to {{\tilde{V}}_{{\mathbf{e}}_{m}}}\right) + \frac{\Delta }{N}}{\sum\limits^{N}_{i=1}{ S\left({\mathbf{g}} \to {V^{i}_{{{\mathbf{e}}_{m}}}}\right)} + \Delta } \notag \\ &=& \frac{\sum\limits_{{{\mathbf{e}}_{j}} \in {\mathbf{g}}- \{ {{\mathbf{e}}_{m}} \}} { \sum\limits^{N}_{k=1} {S\left({{V^{k}_{{{\mathbf{e}}_{j}}}} \to {{\tilde{V}}_{{{\mathbf{e}}_{m}}}}}\right)w\left(V^{k}_{{\mathbf{e}}_{j}}\right)}} + \frac{M \Delta }{N}}{\sum\limits^{N}_{i=1}{ \sum\limits_{{{\mathbf{e}}_{j}} \in {\mathbf{g}}- \{ {{\mathbf{e}}_{m}} \}} { \sum\limits^{N}_{k=1} {S\left({{V^{k}_{{{\mathbf{e}}_{j}}}} \to {V^{i}_{{{\mathbf{e}}_{m}}}}}\right)w(V^{k}_{{\mathbf{e}}_{j}})} } } + M\Delta }\:. \end{array} $$
(4)

Here, Δ is a small constant that prevents zeros. Notice that \(w\left ({\tilde {V}}_{{\mathbf {e}}_{m}}\right )\) is defined using \(w\left (V^{k}_{{\mathbf {e}}_{j}}\right )\), the weights of the votes of the other object parts in g. To determine \(w({\tilde {V}}_{{\mathbf {e}}_{m}})\), uniform weights are first assigned to the votes of each object part in g, i.e., \(w\left (V^{k}_{{\mathbf {e}}_{j}}\right )=\frac {1}{N}\); new weights are then calculated from the current ones, and this is repeated until convergence. The weights used to form the Hough image are the converged weights.
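A possible implementation of this iterative scheme is sketched below. It assumes every object part casts the same number N of votes and uses a fixed number of iterations in place of an explicit convergence test; the helper support and the default values of Δ, d, and σ are assumptions of the sketch.

import numpy as np

def common_fate_weights(group_votes, M, delta=1e-3, n_iter=10, d=10.0, sigma=3.0):
    """Iterate Eq. 4 inside one motion group. group_votes[m] is a list of the
    N (x_V, l_V) votes of object part e_m; M is the total number of parts."""
    def support(src, dst):            # S(src -> dst) = B(x_dst, l_dst; src), Eq. 2
        (xs, ls), (xd, ld) = src, dst
        gap = np.linalg.norm(np.asarray(xd, float) - np.asarray(xs, float))
        return 0.0 if (ls != ld or gap > d) else np.exp(-0.5 * (gap / sigma) ** 2)

    N = len(group_votes[0])
    w = [np.full(N, 1.0 / N) for _ in group_votes]          # uniform start
    for _ in range(n_iter):                                  # fixed iteration count
        new_w = []
        for m, votes_m in enumerate(group_votes):
            s = np.zeros(N)
            for i, v_i in enumerate(votes_m):
                s[i] = sum(w[j][k] * support(v_jk, v_i)
                           for j, votes_j in enumerate(group_votes) if j != m
                           for k, v_jk in enumerate(votes_j))
            new_w.append((s + M * delta / N) / (s.sum() + M * delta))
        w = new_w
    return w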

The grouping result γ = {g} can be replaced by grouping results based on other information; our method uses motion to group the voting elements. This manner of extending the Hough transform is very general, and the Hough transform extended with motion grouping results is called the common fate Hough transform. The votes given by the best matched codes and the votes with higher defined weights are shown in Fig. 2.

Fig. 2

Effect of the proposed weight. a Motion groups, different colors mark different motion groups. b Voted centers given by the 7 best matched codes. c Voted centers with the highest defined weights. d Voted centers with weights higher than a threshold

3.2 Motion Grouping

This subsection describes how the object parts are grouped by their motion patterns. The object parts are tracked through frames before and after the current frame to generate trajectories, and they are then grouped by the pairwise motion similarities of their corresponding trajectories.

The object parts in this method take the form of keypoint descriptors. The Harris corner [9] is chosen, for robustness, to locate each object part, while its appearance is described by the region covariance [22] of the image patch around the keypoint. This image feature is chosen for its flexibility in combining multiple channels of information and its ability to handle scale changes within a certain range.

For each object part, a trajectory is generated by tracking its corresponding Harris Corner by the KLT tracker [21]. To group the trajectories, two pairwise similarities are defined.

Let \({T_{{{\mathbf {e}}_{m}}}}\) and \({T_{{{\mathbf {e}}_{n}}}}\) denote the two trajectories corresponding to \({\mathbf{e}}_{m}\) and \({\mathbf{e}}_{n}\). The first similarity between two trajectories is defined as,

$$ {D_{1}}\left(T_{{\mathbf{e}}_{m}},T_{{\mathbf{e}}_{n}}\right) = \mathop {\max }\limits_{i=1...L} \left(\left|{\mathbf{x}}^{i}_{T_{{\mathbf{e}}_{m}}}-{\mathbf{x}}^{i}_{T_{{\mathbf{e}}_{n}}}\right|\right)\:. $$

Here, i is the frame index, and L is the number of frames in which both trajectories exist.

To define the second similarity, the ith directional vector of T is first defined as \({\mathbf {d}}^{i}_{T}={\mathbf {x}}^{i+3}_{T}-{\mathbf {x}}^{i}_{T}\). Let \({\mathbf {a}}_{i}={\mathbf {d}}^{i}_{T_{{\mathbf {e}}_{m}}}\), \({\mathbf {b}}_{i}={\mathbf {d}}^{i}_{T_{{\mathbf {e}}_{n}}}\), \(a_{i}=\frac {{{{\mathbf {a}}_{i}}\cdot {{\mathbf {b}}_{i}}}}{{{{\mathbf {a}}_{i}}\cdot {{\mathbf {a}}_{i}}}}\), and \(b_{i}=\frac {{{{\mathbf {a}}_{i}}\cdot {{\mathbf {b}}_{i}}}}{{{{\mathbf {b}}_{i}}\cdot {{\mathbf {b}}_{i}}}}\). Then the second similarity is defined as,

$${D_{2}}(T_{{\mathbf{e}}_{m}},T_{{\mathbf{e}}_{n}})= \mathop {\max }\limits_{i=1...L-3} (\max (|{{\mathbf{a}}_{i}} - a_{i} {{\mathbf{a}}_{i}}|,|{{\mathbf{b}}_{i}} - b_{i}{{\mathbf{b}}_{i}}|))\:. $$

Before grouping the trajectories, static points are excluded. \(D_{1}\) is calculated for all pairs of trajectories, and a minimum spanning tree is built over these distances. The tree is split by cutting edges larger than a threshold, \(D^{1}_{th}\), which gives a grouping of the trajectories. Within each resulting cluster, \(D_{2}\) is used in the same procedure to generate even smaller clusters. This hierarchical procedure ensures that trajectories in the same group have both small \(D_{1}\) and small \(D_{2}\). The max operation is used in the definitions of both \(D_{1}\) and \(D_{2}\); this is helpful because two trajectories often have different lengths, and under such situations the max operation, taken over only the overlapping frames, is more stable than other operations such as the average.
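The sketch below illustrates one way to implement the two trajectory distances and the hierarchical MST-based grouping, using SciPy's minimum_spanning_tree and connected_components; the alignment of trajectories over their common frames, the small epsilon added to keep MST edges positive, and the handling of very short overlaps are assumptions of the sketch.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def d1(ta, tb):
    """Max positional gap over the common frames (trajectories assumed to be
    aligned arrays of shape (length, 2) over their shared frames)."""
    L = min(len(ta), len(tb))
    return np.max(np.linalg.norm(ta[:L] - tb[:L], axis=1))

def d2(ta, tb):
    """Max residual after mutually projecting the 3-frame direction vectors."""
    L = min(len(ta), len(tb))
    a, b = ta[3:L] - ta[:L - 3], tb[3:L] - tb[:L - 3]
    ab = np.sum(a * b, axis=1)
    ra = a - (ab / np.sum(a * a, axis=1))[:, None] * a
    rb = b - (ab / np.sum(b * b, axis=1))[:, None] * b
    return np.max(np.maximum(np.linalg.norm(ra, axis=1), np.linalg.norm(rb, axis=1)))

def split_by_threshold(trajectories, dist_fn, threshold):
    """Build an MST over pairwise distances and cut edges above the threshold."""
    n = len(trajectories)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # tiny offset keeps genuine edges strictly positive in the sparse MST
            dist[i, j] = dist[j, i] = dist_fn(trajectories[i], trajectories[j]) + 1e-9
    mst = minimum_spanning_tree(csr_matrix(dist)).toarray()
    mst[mst > threshold] = 0.0                      # cut long edges
    _, labels = connected_components(csr_matrix(mst), directed=False)
    return labels

def motion_groups(trajectories, th1, th2):
    """Hierarchical grouping: first split by D1, then refine each group by D2."""
    groups, labels1 = [], split_by_threshold(trajectories, d1, th1)
    for g in np.unique(labels1):
        idx = np.where(labels1 == g)[0]
        sub = [trajectories[i] for i in idx]
        labels2 = split_by_threshold(sub, d2, th2)
        for h in np.unique(labels2):
            groups.append(idx[labels2 == h].tolist())
    return groups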

Each trajectory corresponds to an object part, and the grouping results of the trajectories correspond to grouping results of the object parts.

3.3 Codebook

For training, Harris corners are extracted from the training images with the object center and the class label annotated. In this method, region covariance is chosen to represent the appearance, which is defined as,

$${\mathbf{r}} = \frac{1}{{K - 1}}\sum\limits_{i = 1}^{K} {({{\mathbf{z}}_{i}} - {\mathbf{\mu }}){{({{\mathbf{z}}_{i}} - {\mathbf{\mu }})}^{T}}} \;. $$

Here, K is the number of pixels in the region, \({\mathbf{z}}_{i}\) is the 7-dimensional feature vector of the ith pixel, and μ is the mean of the \({\mathbf{z}}_{i}\). For a pixel at (x, y), z(x, y) contains the RGB color and the intensity gradients of the pixel: r(x, y), g(x, y), b(x, y), \(\left |\frac {\partial I(x,y)} {\partial x}\right |\), \(\left |\frac {\partial I(x,y)}{\partial y}\right |\), \(\left |\frac {{\partial ^{2}}I(x,y)}{\partial {x^{2}}}\right |\), and \(\left |\frac {{\partial ^{2}}I(x,y)}{\partial {y^{2}}}\right |\).
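A minimal sketch of this feature follows, assuming the intensity I is taken as the mean of the RGB channels and that np.gradient is an acceptable stand-in for the derivative filters.

import numpy as np

def region_covariance(patch_rgb):
    """7-D region covariance of an RGB patch: the colour channels plus the
    magnitudes of the first and second intensity derivatives."""
    gray = patch_rgb.astype(float).mean(axis=2)
    ix, iy = np.gradient(gray)
    ixx, _ = np.gradient(ix)
    _, iyy = np.gradient(iy)
    feats = np.stack([patch_rgb[..., 0], patch_rgb[..., 1], patch_rgb[..., 2],
                      np.abs(ix), np.abs(iy), np.abs(ixx), np.abs(iyy)], axis=-1)
    z = feats.reshape(-1, 7).astype(float)     # one 7-D vector per pixel
    return np.cov(z, rowvar=False)             # normalised by K - 1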

The appearance similarity between r m and r n is given by,

$$\rho ({\mathbf{r}}_{m},{\mathbf{r}}_{n}) = \sqrt {\sum\limits_{i = 1}^{7} {{{\ln }^{2}}{\lambda_{i}}} }\;. $$

Here, \(\lambda_{i}\) is the ith generalized eigenvalue obtained by solving the generalized eigenvalue problem \(\lambda_{i} {\mathbf{r}}_{m} {\mathbf{u}}_{i} = {\mathbf{r}}_{n} {\mathbf{u}}_{i}\), \({\mathbf{u}}_{i} \ne \mathbf{0}\), where \({\mathbf{u}}_{i}\) is the corresponding generalized eigenvector.
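This distance can be computed with a generalized eigenvalue solver; the sketch below uses scipy.linalg.eigh and adds a small regularizer so that both matrices are positive definite (the regularizer is an assumption of the sketch, not part of the paper).

import numpy as np
from scipy.linalg import eigh

def covariance_distance(rm, rn, eps=1e-8):
    """rho(r_m, r_n): square root of the summed squared logs of the
    generalized eigenvalues of the pair (r_m, r_n)."""
    rm = rm + eps * np.eye(rm.shape[0])   # regularize to positive definite
    rn = rn + eps * np.eye(rn.shape[0])
    lam = eigh(rn, rm, eigvals_only=True) # solves r_n u = lambda r_m u
    return np.sqrt(np.sum(np.log(lam) ** 2))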

A square image patch around each keypoint is used to represent the appearance of an object part. Six region covariances are generated for each image patch, using the pixels of the top-left, top-right, bottom-left, bottom-right, and central portions, and of the entire image patch. Besides the offset and the class label, a code therefore contains six region covariances. All codes from all training images constitute the codebook. When an object part is matched against the codebook, the similarity between the image patch of the object part and a code is defined by the smallest distance among the corresponding region covariances. Because the six sub-patches are not of the same scale, this handles scale changes within a small range; the method's ability to handle scale changes is therefore limited, and it can only be used in surveillance situations where the scales of the target objects vary within a limited range.
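One reading of this matching rule is sketched below, reusing region_covariance and covariance_distance from the sketches above; the exact sub-region layout and the interpretation of the part-to-code distance as the minimum ρ over the six covariance pairs are assumptions of the sketch.

def six_covariances(patch):
    """Six region covariances from one square patch: four quadrants,
    a central sub-patch, and the whole patch."""
    h, w = patch.shape[:2]
    h2, w2, q = h // 2, w // 2, h // 4
    regions = [patch[:h2, :w2], patch[:h2, w2:], patch[h2:, :w2],
               patch[h2:, w2:], patch[q:h - q, q:w - q], patch]
    return [region_covariance(r) for r in regions]

def part_to_code_distance(part_covs, code_covs):
    """Distance between an observed part and a code, taken as the smallest
    distance among the six corresponding covariance pairs."""
    return min(covariance_distance(a, b) for a, b in zip(part_covs, code_covs))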

4 Detection

After forming the Hough image, the detection hypotheses are validated. Let h = {H} be the set of points in the Hough space that are evaluated by \(C({\mathbf{x}}_{H}, l_{H})\) and have \(C({\mathbf{x}}_{H}, l_{H}) > 0\). Inspired by [2], the hypotheses are validated by an optimization procedure. Let O be the number of points in h, and let \(u_{i} \in \{0, 1\}\) indicate whether \(H_{i}\) is a true object center. The problem is:

$$\arg \max\limits_{u_{i}} \prod\limits_{i = 1}^{O} { C^{u_{i}}({H_{i}})} \Longleftrightarrow\arg \max\limits_{u_{i}} \sum\limits_{i = 1}^{O} {{u_{i}}\ln (C({H_{i}})} )\:. $$

Let \(v_{ij} \in \{0, 1\}\) indicate whether \({\mathbf{e}}_{j}\) belongs to \(H_{i}\); then

$$\begin{array}{@{}rcl@{}} C(H_{i})&=&\sum\limits^{M}_{j=1} C({\mathbf{x}}_{H_{i}},l_{H_{i}};{\mathbf{e}}_{j})w({\mathbf{e}}_{j})\\ &=&\frac 1 M \sum\limits^{M}_{j=1} v_{ij} C({\mathbf{x}}_{H_{i}},l_{H_{i}};{\mathbf{e}}_{j})\:, \end{array} $$

and by assuming that each object part belongs to one and only one hypothesis, the problem becomes,

$$\begin{array}{@{}rcl@{}} &&{}\arg \max\limits_{u_{i},v_{ij}} \sum\limits_{i = 1}^{O} {u_{i}}\ln \left(\sum\limits^{M}_{j=1} v_{ij} C({\mathbf{x}}_{H_{i}},l_{H_{i}};{\mathbf{e}}_{j}) \right)\\ &&{} \begin{array}{rl} s.t.:\mbox{ }&u_{i}=0\mbox{ or }u_{i}=1,\forall\;i;\\ &v_{ij}=0\mbox{ or }v_{ij}=1,\forall\;i,\forall\;j;\\ &\sum\limits_{i = 1}^{O} {v_{ij}}=1,\forall\;j;\;\; \\ &\sum\limits_{j = 1}^{M} {v_{ij}}\leq u_{i},\forall\;i\:. \end{array} \end{array} $$

Following [2], the problem is solved by greedy maximization. As described in Algorithm 1, the largest of all the local maxima is chosen as the center of a true object, and the object parts belonging to the chosen object center are then excluded from the object part set. A new Hough image is formed from the remaining object parts, on which further objects are found. This procedure ends when the object part set is empty, or when the confidence of the chosen object falls below a given threshold.
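A sketch of this greedy procedure follows; form_hough, assign_parts, and min_confidence are placeholders for the Hough image construction of Section 3, the part-to-hypothesis assignment, and the confidence threshold, all of which are assumptions of the sketch.

import numpy as np

def greedy_detection(parts, form_hough, assign_parts, min_confidence):
    """Repeatedly accept the strongest peak, remove the parts assigned to it,
    and re-form the Hough image from the remaining parts."""
    detections, remaining = [], list(parts)
    while remaining:
        hough = form_hough(remaining)
        peak = np.unravel_index(np.argmax(hough), hough.shape)
        if hough[peak] < min_confidence:
            break
        detections.append((peak, hough[peak]))
        used = set(assign_parts(remaining, peak))  # indices of explained parts
        if not used:                               # safeguard against stalling
            break
        remaining = [p for i, p in enumerate(remaining) if i not in used]
    return detections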

5 Experimental Results

In our experiments, the improvement brought by the method is verified in terms of detection accuracy. The method is tested on the P-campus dataset with [2] as the benchmark, and then on a dataset of several animals.

5.1 Campus-scene Detection

Dataset

The P-campus dataset contains two primary classes of foreground objects: pedestrians and bicycle riders. The frame size is 720 × 576. Among the 401 consecutive frames, 633 ground truth bounding boxes of the two classes are annotated on 79 frames. In this dataset, pedestrians and bicycle riders share the appearance of the upper human body, and pedestrians appear in front, back, and side views.

Implementation Settings

For training, 52 bicycle riders and 171 pedestrians are randomly selected from the marked ground truths. Harris corners are detected on these randomly selected training images; examples are given in Fig. 3a. For appearance, six region covariances are generated for each keypoint using the 9 × 9 image patch around it, as shown in Fig. 3b. The appearance, the offset to the image (object) center, and the label of the training image are encoded into a code, and the code is inserted into the codebook. The final codebook contains 5502 codes. The testing data consists of the 79 frames on which the ground truth bounding boxes are marked. Harris corners are detected and region covariances are generated in the same manner as for the training images. For each Harris corner on a testing image, the corresponding region covariances are matched against the codebook for the most similar codes. Some of the training examples appear in the test sequences; the emphasis of this experiment is to verify the proposed framework's ability to combine motion information, and since both the proposed method and the benchmark method use the same training and testing images, the comparison remains fair and demonstrates the effectiveness of the proposed method.

Fig. 3

a Training images. Note that some keypoints fall on the background. b How a 9 × 9 image patch is used to generate six region covariances; red rectangles indicate the pixels used for each covariance

For motion grouping, each keypoint is tracked through the 10 frames before and the 10 frames after the current frame. The similarity of two 21-point trajectories is defined using only the frames in which both trajectories exist. To set the two thresholds for motion grouping, \(D_{1}\) and \(D_{2}\) are measured for keypoint pairs known to belong to different objects. \(D^{1}_{th}\) is set so that it is larger than only 10% of the measured \(D_{1}\) values, and \(D^{2}_{th}\) is set in the same way. In this way, keypoints belonging to different objects are unlikely to be grouped together, so the keypoints in one motion group are very likely to belong to the same object, as shown in Fig. 4.
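Under these assumptions, the threshold rule amounts to a percentile computation over the measured cross-object distances, e.g.:

import numpy as np

def grouping_threshold(cross_object_distances):
    """Threshold larger than only 10% of the distances measured between
    keypoint pairs known to lie on different objects."""
    return np.percentile(cross_object_distances, 10)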

Fig. 4

Motion grouping results

To form the Hough image, the 35 best matched codes are chosen from the codebook for each object part. In Eq. 3, d and σ need to be given. The precision-recall curves depend on σ, while d is set to 10; here σ is the most important parameter.

Comparisons

For comparison, detection is done on the Hough images formed with and without motion grouping results. The same codebook and the same parameter settings are used for forming and searching over both Hough images. The votes of each object part are assigned uniform weights in the benchmark method, while weights defined in Eq. 4 are assigned in the proposed method.

The precision-recall curves are shown in Fig. 5a. An object is considered correctly detected only if its distance from the ground truth is less than 10 pixels. In Fig. 5a, correctly positioned but wrongly labeled objects are counted as true positives, in order to verify the positioning ability of the proposed method.

Fig. 5

a Precision-recall curves (red: the proposed method, blue: the benchmark method). b Confusion matrices (upper: the proposed method, lower: the benchmark method)

The confusion matrices are given in Fig. 5b. For clarity, the proposed method is compared with the benchmark method at operating points where the two methods have a nearly equal number of false alarms. To evaluate the labeling ability, a “none” class representing missed detections and false alarms is added manually. For example, in Fig. 5b, 487 pedestrian instances are correctly positioned and labeled by the proposed method, 2 are wrongly labeled as bicycle riders, and 21 are missed. More results are shown in Fig. 6.

Fig. 6

Results. Red rectangles and blue rectangles mark correctly detected pedestrians and bicycle riders, respectively. Yellow rectangles mark missed detections. White rectangles mark correctly detected but incorrectly labeled objects. Green rectangles mark false alarms. Black rectangles mark static objects, which are beyond the scope of this method

5.2 Wild-scene Detection

Dataset

To show that our method is applicable to general purposes, we test it on complicated scenes, in particular scenes with complicated backgrounds; even in these cases it works well, which demonstrates its robustness. A mini dataset is built from leopards and tigers of the family Felidae. Note especially that the image feature used by this method is texture based, and the texture at different positions on a leopard is almost the same. The dataset contains 6 video clips with 9 leopards and 4 tigers. The frame size is 640 × 480. Both animals appear in side view.

Implementation settings

Most implementation settings are the same as those used for campus-scene detection. For training, 5 leopards and 2 tigers are used. The size of the image patch around each keypoint is 27 × 27.

Comparisons

In Fig. 7, the motion grouping results are shown together with their effect on the voted centers. Since parts from different positions on a leopard are very similar, the true center of a leopard is difficult to find from the voted centers of the object parts. In Fig. 8, example Hough images are given to show the merit of the proposed weighting through the ability to detect leopards. In Fig. 9, the detection results are given: the proposed method successfully localizes and labels all the leopards and tigers, while the benchmark method misses three leopards.

Fig. 7

Effect of the proposed weight assignment. Red circles are voted centers for leopards, while blue ones are voted centers for tigers. On the top are the motion grouping results. In the middle are the voted centers according to the best matched codes. On the bottom are the voted centers given by the votes with the highest weights

Fig. 8

Example Hough images. On the top are the original images. In the middle are the Hough images formed by votes with uniform weights. On the bottom are the Hough images formed by votes with the proposed weights. Red indicates leopards, and blue indicates tigers. Note that for the two-leopard scene, the benchmark Hough image has no peak corresponding to the leopard on the right; for the three-leopard scene, the benchmark Hough image has no peak corresponding to the leopard behind

Fig. 9

Results. Red crosses mark the centers for leopards and blue crosses mark the centers for tigers

6 Conclusion

The computational ability of human beings is limited, yet their ability to detect objects is far beyond that of machines. It is therefore very likely that this detection ability benefits from multiple perceptual mechanisms. By using one of these mechanisms, we propose a detection method: embedding motion grouping results into the voting scheme of the Hough transform enables the method to distinguish the positions of nearby objects, distinguish the labels of similar objects, and maintain the detection rate with a noisy codebook. The success of our method further illustrates the power of human perceptual mechanisms, and it can benefit detection methods in ITS applications.