
1 Introduction

The goal of person re-identification is to identify the same person across videos captured from different cameras. It is a fundamental visual recognition problem in video surveillance with various applications [55]. It is challenging because the camera views are usually disjoint, the temporal transition time between cameras varies considerably, and the lighting conditions/person poses differ across cameras in real-world scenarios.

Fig. 1. (a, b) As a person appears in different poses/viewpoints in different cameras, and (c) human detections are imperfect, the corresponding body parts are usually not spatially aligned across the human detections, causing person re-identification to be challenging.

Body part misalignment (i.e., the problem that body parts are spatially misaligned across person images) is one of the key challenges in person re-identification. Figure 1 shows some examples. This problem causes conventional strip/grid-based representations [1, 10, 25, 58, 69, 71] to be unreliable as they implicitly assume that every person appears in a similar pose within a tightly surrounded bounding box. Thus, a body part-aligned representation, which can ease the representation comparison and avoid the need for complex comparison techniques, should be designed.

To resolve this problem, recent approaches have attempted to localize body parts explicitly and combine the representations over them [23, 50, 74, 75, 78]. For example, the body parts are represented by pre-defined (or refined [50]) bounding boxes estimated by state-of-the-art pose estimators [4, 50, 74, 78]. This scheme requires highly accurate pose estimation, and, unfortunately, state-of-the-art pose estimation solutions are still not perfect. Also, these schemes are bounding box-based and lack fine-grained part localization within the boxes. To mitigate these problems, we propose to encode human poses by feature maps rather than by bounding boxes. Recently, Zhao et al. [75] represented body parts through confidence maps, which are estimated using attention techniques. However, this method lacks guidance on body part locations during training and thus fails to attend to body regions consistently.

In this paper, we propose a part-aligned representation for person re-identification. Our approach learns to represent the human poses as part maps and combine them directly with the appearance maps to compute part-aligned representations. More precisely, our model consists of a two-stream network and an aggregation module. (1) Each stream separately generates appearance and body part maps. (2) The aggregation module first generates the part-aligned feature maps by computing the bilinear mapping of the appearance and part descriptors at each location, and then spatially averages the local part-aligned descriptors. The resulting image matching similarity is equivalent to an aggregation of the local appearance similarities of the corresponding body parts. Since it does not depend on the relative positions of parts, the misalignment problem is reduced.

Training the network does not require any body part annotations on the person re-identification dataset. Instead, we simply initialize the part map generation stream using weights pre-trained on a standard pose estimation dataset. Surprisingly, although our approach only optimizes the re-identification loss function, the resulting two-stream network successfully separates appearance and part information into the two streams, generating the appearance and part maps from each stream, respectively. In particular, the part maps adapt from their original form to further differentiate body parts that are informative for person re-identification. Through extensive experiments, we verify that our approach consistently improves the accuracy over the baseline and achieves competitive or superior performance on the standard image datasets, Market-1501, CUHK03, CUHK01 and DukeMTMC, and on one standard video dataset, MARS.

2 Related Work

The early solutions of person re-identification mainly relied on hand-crafted features [18, 27, 36, 39], metric learning techniques [20, 22, 26, 28, 42, 70, 72], and probabilistic patch matching algorithms [5, 6, 48] to handle resolution/light/view/pose changes. Recently, attributes [51, 52, 76], transfer learning [43, 49], re-ranking [15, 80], partial person matching [82], and human-in-the-loop learning [38, 60], have also been studied. More can be found in the survey [81]. In the following, we review recent spatial-partition-based and part-aligned representations, matching techniques, and some works using bilinear pooling.

Regular Spatial-Partition Based Representations. The approaches in this stream of research represent an image as a combination of local descriptors, where each local descriptor represents a spatial partition such as grid cell [1, 25, 71] and horizontal stripe [10, 58, 69]. They work well under a strict assumption that the location of each body part is consistent across images. This assumption is often violated under realistic conditions, thereby causing the methods to fail. An extreme case is that no spatial partition is used and a global representation is computed over the whole image [7, 42, 63,64,65, 77].

Body Part-Aligned Representations. Body part and pose detection results have been exploited for person re-identification to handle the body part misalignment problem [3, 11,12,13, 62, 68]. Recently, these ideas have been re-studied using deep learning techniques. Most approaches [50, 74, 78] represent an image as a combination of body part descriptors, where a dozen pre-defined body parts are detected using an off-the-shelf pose estimator (possibly with an additional RoI refinement step). They usually crop bounding boxes around the detected body parts and compute the representations over the cropped boxes. In contrast, we propose part-map-based representations, which differ from the previously used box-based representations [50, 74, 78].

Tang et al. [55] also introduced part maps for person re-identification as a component of solving the multi-people tracking problem. They used part maps to augment appearances as another feature, rather than to generate part-aligned representations, which differs from our method. Some works [34, 75] proposed the use of attention maps, which are expected to attend to informative body parts. They often fail to produce reliable attention maps because the attention maps are estimated from the appearance maps; guidance from body part locations is lacking, resulting in limited performance.

Matching. Simple similarity functions, e.g., cosine similarity or Euclidean distance, have been adopted [10, 58, 69] for part-aligned representations, such as our approach, or under the assumption that the representations are already body part/pose aligned. Various schemes [1, 25, 59, 71] were designed to eliminate the influence of body part misalignment for spatial partition-based representations. For instance, a matching sub-network was proposed to conduct convolution and max-pooling operations over the differences [1] or the concatenation [25, 71] of grid-based representations of a pair of person images. Varior et al. [57] proposed the use of matching maps in the intermediate features to guide feature extraction in the later layers through a gated CNN.

Bilinear Pooling. Bilinear pooling is a scheme to aggregate two different types of feature maps by using the outer product at each location and spatial pooling them to obtain a global descriptor. This strategy has been widely adopted in fine-grained recognition [14, 21, 30] and showed promising performance. For person re-identification, Ustinova et al. [56] adopted a bilinear pooling to aggregate two different appearance maps; this method does not generate part-aligned representations and leads to poor performance. Our approach uses a bilinear pooling to aggregate appearance and part maps to compute part-aligned representations.

Fig. 2. Overview of the proposed model. The model consists of a two-stream network and an aggregator (bilinear pooling). For a given image \(\mathbf {I}\), the appearance and part map extractors, \(\mathcal {A}\) and \(\mathcal {P}\), generate the appearance and part maps, \(\mathbf {A}\) and \(\mathbf {P}\), respectively. The aggregator performs bilinear pooling over \(\mathbf {A}\) and \(\mathbf {P}\) and generates a feature vector \(\mathbf {f}\). Finally, the feature vector is \(l_2\)-normalized, resulting in the final part-aligned representation \(\mathbf {\tilde{f}}\). Conv and BN denote the convolution and batch normalization layers, respectively.

3 Our Approach

The proposed model consists of a two-stream network and an aggregation module. It receives an image \(\mathbf {I}\) as an input and outputs a part-aligned feature representation \(\tilde{\mathbf {f}}\) as illustrated in Fig. 2. The two-stream network contains two separate sub-networks, the appearance map extractor \(\mathcal {A}\) and the part map extractor \(\mathcal {P}\), which extract the appearance map \(\mathbf {A}\) and the part map \(\mathbf {P}\), respectively. The two types of maps are aggregated through bilinear pooling to generate the part-aligned feature \(\mathbf {f}\), which is subsequently normalized to generate the final feature vector \(\tilde{\mathbf {f}}\).

3.1 Two-Stream Network

Appearance Map Extractor. We feed an input image \(\mathbf {I}\) into the appearance map extractor \(\mathcal {A}\), thereby outputting the appearance map \(\mathbf {A}\):

$$\begin{aligned} \mathbf {A} = \mathcal {A}(\mathbf {I}). \end{aligned}$$
(1)

\(\mathbf {A} \in \mathbb {R}^{h \times w \times c_A}\) is a feature map of size \(h \times w\), where each location is described by a \(c_A\)-dimensional local appearance descriptor. We use the sub-network of GoogLeNet [54] to form and initialize \(\mathcal {A}\).

Part Map Extractor. The part map extractor \(\mathcal {P}\) receives an input image \(\mathbf {I}\) and outputs the part map \(\mathbf {P}\):

$$\begin{aligned} \mathbf {P}=\mathcal {P}(\mathbf {I}). \end{aligned}$$
(2)

\(\mathbf {P}\in \mathbb {R}^{h\times w \times c_P}\) is a feature map of size \(h \times w\), where each location is described by a \(c_P\)-dimensional local part descriptor. Considering the rapid progress in pose estimation, we use the sub-network of the pose estimation network, OpenPose [4], to form and initialize \(\mathcal {P}\). We denote the sub-network of the OpenPose as \(\mathcal {P}_{pose}\).

3.2 Bilinear Pooling

Let \(\mathbf {a}_{xy}\) be the appearance descriptor at position (x, y) from the appearance map \(\mathbf {A}\), and \(\mathbf {p}_{xy}\) be the part descriptor at position (x, y) from the part map \(\mathbf {P}\). We perform bilinear pooling over \(\mathbf {A}\) and \(\mathbf {P}\) to compute the part-aligned representation \(\mathbf {f}\). There are two steps, bilinear transformation and spatial global pooling, which are mathematically given as follows:

$$\begin{aligned} \mathbf {f} = {\text {pooling}}_{xy}\{\mathbf {f}_{xy}\} = \frac{1}{S} \sum _{xy}\mathbf {f}_{xy}, \quad \quad \mathbf {f}_{xy} = {\text {vec}} (\mathbf {a}_{xy}\otimes \mathbf {p}_{xy}), \end{aligned}$$
(3)

where S is the spatial size. The pooling operation we use here is average-pooling. \({\text {vec}}(.)\) transforms a matrix to a vector, and \(\otimes \) represents the outer product of two vectors, with the output being a matrix. The part-aligned feature \(\mathbf {f}\) is then normalized to generate the final feature vector \(\tilde{\mathbf {f}}\) as follows:

$$\begin{aligned} \tilde{\mathbf {f}} = \frac{\mathbf {f}}{\Vert \mathbf {f} \Vert _2}. \end{aligned}$$
(4)

Considering the normalization, we denote the normalized part-aligned representation as \(\tilde{\mathbf {f}}_{xy} = {\text {vec}}(\tilde{\mathbf {a}}_{xy}\otimes \tilde{\mathbf {p}}_{xy})\), where \(\tilde{\mathbf {a}}_{xy} = \frac{\mathbf {a}_{xy}}{\sqrt{\Vert \mathbf {f}\Vert _2}}\) and \(\tilde{\mathbf {p}}_{xy} = \frac{\mathbf {p}_{xy}}{\sqrt{\Vert \mathbf {f}\Vert _2}}\). Therefore, \(\tilde{\mathbf {f}} = \frac{1}{S}\sum _{xy}{\tilde{\mathbf {f}}_{xy}}\).
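
For concreteness, the following Python sketch (an illustration under our own naming conventions, not the released implementation) computes the part-aligned representation of Eqs. (3) and (4) for a single image; the channel sizes \(c_A=512\) and \(c_P=128\) and the \(20\times 10\) map size follow Sect. 5, everything else is illustrative.

```python
import torch

def part_aligned_feature(A, P):
    """A sketch of Eqs. (3) and (4) for a single image: A is an h x w x c_A
    appearance map and P is an h x w x c_P part map."""
    h, w, c_A = A.shape
    c_P = P.shape[-1]
    S = h * w
    a = A.reshape(S, c_A)                                   # local appearance descriptors a_xy
    p = P.reshape(S, c_P)                                   # local part descriptors p_xy
    # f_xy = vec(a_xy (outer) p_xy), laid out as [p_1*a, ..., p_cP*a] as in Eq. (5)
    f_xy = (p.unsqueeze(2) * a.unsqueeze(1)).reshape(S, c_P * c_A)
    f = f_xy.mean(dim=0)                                    # spatial average pooling, Eq. (3)
    return f / f.norm()                                     # l2 normalization, Eq. (4)

# toy usage with random maps
A = torch.randn(20, 10, 512)
P = torch.randn(20, 10, 128)
f_tilde = part_aligned_feature(A, P)   # 65,536-dim; compressed by compact bilinear pooling in practice (Sect. 5)
```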

Part-Aligned Interpretation. We can decompose \(\mathbf {a} \otimes \mathbf {p}\) into \(c_P\) components:

$$\begin{aligned} {\text {vec}}(\mathbf {a} \otimes \mathbf {p}) = [(p_1\mathbf {a})^\top ~ (p_2\mathbf {a})^\top ~ \dots (p_{c_P}\mathbf {a})^\top ]^\top , \end{aligned}$$
(5)

where each sub-vector \(p_i \mathbf {a}\) corresponds to the i-th part channel. For example, if \(p_{knee}=1\) on the knee and 0 otherwise, then \(p_{knee} \mathbf {a}\) becomes \(\mathbf {a}\) on the knee and \(\mathbf {0}\) otherwise. Thus, we call \({\text {vec}}(\mathbf {a}\otimes \mathbf {p})\) the part-aligned representation. In general, each channel does not necessarily correspond to a specific body part. However, the part-aligned representation remains valid as long as \(\mathbf {p}\) encodes the body part information. Section 4 describes this interpretation in detail.
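
This decomposition can be checked numerically; the snippet below is a toy illustration with made-up descriptor sizes and a one-hot part descriptor standing in for the "knee" example.

```python
import numpy as np

# Tiny check of Eq. (5): vec(a (outer) p) is a concatenation of c_P blocks p_i * a.
a = np.array([1.0, 2.0, 3.0])          # toy appearance descriptor (c_A = 3)
p = np.array([0.0, 1.0, 0.0, 0.0])     # toy part descriptor (c_P = 4), one-hot on the "knee" channel

vec_ap = np.kron(p, a)                 # [p_1*a, p_2*a, p_3*a, p_4*a]
blocks = [p_i * a for p_i in p]
assert np.allclose(vec_ap, np.concatenate(blocks))

# Only the block of the active part channel keeps a copy of a:
print(vec_ap.reshape(len(p), len(a)))
# [[0. 0. 0.]
#  [1. 2. 3.]
#  [0. 0. 0.]
#  [0. 0. 0.]]
```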

3.3 Loss

To train the network, we utilize the widely-used triplet loss function. Let \(\mathbf {I}_q\), \(\mathbf {I}_p\) and \(\mathbf {I}_n\) denote the query, positive and negative images, respectively. Then, \((\mathbf {I}_q, \mathbf {I}_p)\) is a pair of images of the same person, and \((\mathbf {I}_q, \mathbf {I}_n)\) is that of different persons. Let \(\tilde{\mathbf {f}}_q\), \(\tilde{\mathbf {f}}_p\), and \(\tilde{\mathbf {f}}_n\) indicate their representations. The triplet loss function is formulated as

$$\begin{aligned} \ell _{{\text {triplet}}}(\tilde{\mathbf {f}}_q,\tilde{\mathbf {f}}_p,\tilde{\mathbf {f}}_n) = \max (m + {\text {sim}}(\tilde{\mathbf {f}}_q,\tilde{\mathbf {f}}_n) - {\text {sim}}(\tilde{\mathbf {f}}_q,\tilde{\mathbf {f}}_p), 0), \end{aligned}$$
(6)

where m denotes a margin and \({\text {sim}}(\mathbf {x}, \mathbf {y}) = {<}\mathbf {x}, \mathbf {y}{>}\). The margin is empirically set as \(m = 0.2\). The overall loss function is written as follows.

$$\begin{aligned} \mathcal {L} = \frac{1}{|\mathcal {T}|}\sum \nolimits _{(\mathbf {I}_q, \mathbf {I}_p, \mathbf {I}_n) \in \mathcal {T}}\ell _{{\text {triplet}}}(\tilde{\mathbf {f}}_q,\tilde{\mathbf {f}}_p,\tilde{\mathbf {f}}_n), \end{aligned}$$
(7)

where \(\mathcal {T}\) is the set of all triplets, \(\{(\mathbf {I}_q, \mathbf {I}_p, \mathbf {I}_n)\}\).
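
A minimal sketch of Eqs. (6) and (7) is given below, assuming the inputs are the \(l_2\)-normalized features of Eq. (4) stacked over triplets; the batching convention and function names are our own.

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_q, f_p, f_n, margin=0.2):
    """Eq. (6) with sim(x, y) = <x, y>; inputs have shape (num_triplets, d)
    and are assumed to be l2-normalized as in Eq. (4)."""
    sim_qp = (f_q * f_p).sum(dim=1)
    sim_qn = (f_q * f_n).sum(dim=1)
    return F.relu(margin + sim_qn - sim_qp)    # hinge; zero once the similarity gap exceeds the margin

def total_loss(f_q, f_p, f_n):
    """Eq. (7): average of the triplet losses over the set T."""
    return triplet_loss(f_q, f_p, f_n).mean()
```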

4 Analysis

Part-Aware Image Similarity. We show that under the proposed part-aligned representation in Eqs. (3) and (4), the similarity between two images is equivalent to the aggregation of local appearance similarities between the corresponding body parts. The similarity between two images can be represented as the sum of local similarities between every pair of locations as follows.

$$\begin{aligned} {\text {sim}}_I(\mathbf {I}, \mathbf {I}') = {<}\tilde{\mathbf {f}}, \tilde{\mathbf {f}}'{>} &= \frac{1}{S^2}{<}\sum _{xy}\tilde{\mathbf {f}}_{xy}, \sum _{x'y'}\tilde{\mathbf {f}}'_{x'y'}{>} \\ &= \frac{1}{S^2}\sum _{xy} \sum _{x'y'} {<}\tilde{\mathbf {f}}_{xy}, \tilde{\mathbf {f}}'_{x'y'}{>} \\ &= \frac{1}{S^2}\sum _{xy} \sum _{x'y'} {\text {sim}}(\tilde{\mathbf {f}}_{xy}, \tilde{\mathbf {f}}'_{x'y'}), \end{aligned}$$
(8)

where \({\text {sim}}_I(\cdot ,\cdot )\) measures the similarity between images. Here, the local similarity is computed by an inner product:

$$\begin{aligned} {\text {sim}}(\tilde{\mathbf {f}}_{xy}, \tilde{\mathbf {f}}'_{x'y'}) = {<}\tilde{\mathbf {a}}_{xy}, \tilde{\mathbf {a}}'_{x'y'}{>}{<}\tilde{\mathbf {p}}_{xy}, \tilde{\mathbf {p}}'_{x'y'}{>} = {\text {sim}}(\tilde{\mathbf {a}}_{xy}, \tilde{\mathbf {a}}'_{x'y'})\,{\text {sim}}(\tilde{\mathbf {p}}_{xy}, \tilde{\mathbf {p}}'_{x'y'}). \end{aligned}$$
(9)

This local similarity can be interpreted as the appearance similarity weighted by the body part similarity or vice versa. Thus, from Eqs. (8) and (9), the similarity between two images is computed as the average of local appearance similarities weighted by the body part similarities at the corresponding positions:

$$\begin{aligned} {\text {sim}}_I(\mathbf {I}, \mathbf {I}') = \frac{1}{S^2}\sum _{xyx'y'}{ {\text {sim}}(\tilde{\mathbf {a}}_{xy}, \tilde{\mathbf {a}}'_{x'y'}) {\text {sim}}(\tilde{\mathbf {p}}_{xy}, \tilde{\mathbf {p}}'_{x'y'})}. \nonumber \end{aligned}$$

As a result, the image similarity does not depend on the relative positions of parts in images, and therefore the misalignment problem is reduced. To make the local part similarity always non-negative, so that the sign of the local similarity depends only on the sign of the local appearance similarity, we can also restrict the part descriptors \(\mathbf {p}_{xy}\) to be element-wise non-negative by adding a ReLU layer after the part map extractor \(\mathcal {P}\), as shown in Fig. 2. As this variant yields accuracy similar to that of the original model, we use the model without the ReLU layer in all experiments. See the supplementary material for more details.
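
The factorization in Eqs. (8) and (9) can also be verified numerically; the snippet below is a toy check with random maps and our own naming, not part of the training pipeline.

```python
import torch

torch.manual_seed(0)
h, w, c_A, c_P = 4, 3, 6, 5            # toy sizes
S = h * w

def image_feature(A, P):
    """Returns f~ of Eq. (4) together with the normalized local descriptors
    a~_xy = a_xy / sqrt(||f||_2) and p~_xy = p_xy / sqrt(||f||_2)."""
    a = A.reshape(S, c_A)
    p = P.reshape(S, c_P)
    f = (p.unsqueeze(2) * a.unsqueeze(1)).reshape(S, -1).mean(dim=0)   # Eq. (3)
    return f / f.norm(), a / f.norm().sqrt(), p / f.norm().sqrt()

A1, P1 = torch.randn(h, w, c_A), torch.randn(h, w, c_P)
A2, P2 = torch.randn(h, w, c_A), torch.randn(h, w, c_P)
f1, a1, p1 = image_feature(A1, P1)
f2, a2, p2 = image_feature(A2, P2)

lhs = f1 @ f2                                   # sim_I(I, I') = <f~, f~'>
rhs = ((a1 @ a2.T) * (p1 @ p2.T)).sum() / S**2  # (1/S^2) sum of appearance-sim * part-sim
assert torch.allclose(lhs, rhs, atol=1e-4)
```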

Relationship to the Baseline Models. Consider a baseline approach that only uses the appearance maps and spatial global pooling for the image representation. Then, the image similarity is computed as \({\text {sim}}_I(\mathbf {I}, \mathbf {I}') = \frac{1}{S^2}\sum _{xyx'y'}{ {\text {sim}}(\tilde{\mathbf {a}}_{xy}, \tilde{\mathbf {a}}'_{x'y'})}\). Unlike our model, this approach cannot reflect part similarity. Consider another model based on the box-based representation, which represents an image as a concatenation of K body part descriptors, where the k-th body part is represented as the average-pooled appearance feature within the corresponding bounding box. This model is equivalent to ours when \(\mathbf {p}_{xy}\) is defined as \(\mathbf {p}_{xy}=[\delta [(x,y)\in R_{1}], \cdots , \delta [(x,y)\in R_{K}]]\), where \(R_k\) is the region within the k-th part bounding box and \(\delta [\cdot ]\) is an indicator function, i.e., \(\delta [x] = 1\) if x is true and 0 otherwise. Because our model contains these baselines as special cases and is trained to optimize the re-identification loss, it is expected to perform at least as well as they do.
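
The following toy snippet illustrates the box-based special case: with indicator part maps, each block of the bilinear-pooled feature reduces to the appearance descriptor pooled over the corresponding box (up to the \(|R_k|/S\) scale factor). The box layout and all names are hypothetical.

```python
import torch

h, w, c_A, K = 8, 4, 16, 2
A = torch.randn(h, w, c_A)

# hypothetical part boxes: R_1 = upper half of the image, R_2 = lower half
P = torch.zeros(h, w, K)
P[:h // 2, :, 0] = 1.0
P[h // 2:, :, 1] = 1.0

S = h * w
a = A.reshape(S, c_A)
p = P.reshape(S, K)
f = (p.unsqueeze(2) * a.unsqueeze(1)).reshape(S, -1).mean(dim=0)   # Eq. (3)

# block k of f equals (|R_k| / S) * (average appearance descriptor inside box k)
upper_mean = A[:h // 2].reshape(-1, c_A).mean(dim=0)
assert torch.allclose(f[:c_A], 0.5 * upper_mean, atol=1e-6)
```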

Fig. 3. The t-SNE visualization of the normalized local appearance and part descriptors on the Market-1501 dataset. It illustrates that our two-stream network decomposes the appearance and part information into two streams successfully. (a) Appearance descriptors are clustered roughly by colors, independently from the body parts where they came from. (b) Part descriptors are clustered by body parts where they came from, regardless of the colors. (Best viewed on a monitor when zoomed in) (Color figure online)

Fig. 4. Visualization of the appearance maps \(\mathbf {A}\) and part maps \(\mathbf {P}\) obtained from the proposed method. For a given input image (left), appearance (center) and part (right) maps encode the appearance and body parts, respectively. For both appearance and part maps, the same color implies that the descriptors are similar, whereas different colors indicate that the descriptors are different. The appearance maps share similar color patterns among the images from the same person, which means that the patterns of appearance descriptors are similar as well. In the part maps, the color differs depending on the location of the body parts where the descriptors came from. (Best viewed in color)

Fig. 5. Comparing the body part descriptors. For a given image (left), the conventional joint-based (center) and the proposed (right) descriptors are visualized. (Best viewed in color) (Color figure online)

The Two-Stream Network Yields Decomposed Appearance and Part Maps. At the beginning of training, the two streams of the network mainly represent the appearance and part maps because the appearance map extractor \(\mathcal {A}\) and the part map extractor \(\mathcal {P}\) are initialized using a GoogLeNet [54] model pre-trained on ImageNet [46] and an OpenPose [4] model pre-trained on COCO [29], respectively. During training, we do not set any constraints on the two streams, i.e., no annotations for the body parts, and only optimize the re-identification loss. Surprisingly, the trained two-stream network keeps the appearance and part information decomposed across the two streams: one stream corresponds to the appearance maps and the other corresponds to the body part maps.

We visualize the distribution of the learned local appearance and part descriptors using t-SNE [37], as shown in Figs. 3(a) and (b). Figure 3(a) shows that the appearance descriptors are clustered depending on the appearance, independently of the parts that they come from. For example, the red/yellow box shows that the red/black-colored patches are closely embedded, respectively. By contrast, Fig. 3(b) illustrates that the local part embedding maps similar body parts to nearby regions regardless of color. For example, the green/blue box shows that the features from the head/lower leg are clustered, respectively. In addition, physically adjacent body parts, such as head–shoulder and shoulder–torso, are also closely embedded.

To understand how the learned appearance/part descriptors are used in person re-identification, we visualize the appearance maps \(\mathbf {A}\) and the part maps \(\mathbf {P}\) following the visualization used in SIFTFlow [32], as shown in Fig. 4. For a given input image (left), the appearance (center) and part (right) maps encode the appearance and body parts, respectively. The figure shows that the appearance maps differentiate different persons while remaining consistent for the same person. By contrast, the part maps encode the body parts independently of their appearance. In particular, a certain body part is represented by a similar color across images, which confirms our observation in Fig. 3 that the part features from physically adjacent regions are closely embedded.

Our approach learns the optimal part descriptor for person re-identification, rather than relying on pre-defined body parts. Figure 5 qualitatively compares the conventional body part descriptor and the one learned by our approach. In previous works on human pose estimation [4, 41, 61], human poses are represented as a collection of pre-defined key body joint locations. This corresponds to a part descriptor that one-hot encodes the key body joints depending on the existence of a certain body joint at each location, e.g., \(p_{knee}=1\) on the knee and 0 otherwise. Compared to this baseline, ours maps the body parts smoothly. In other words, the colors are continuous over the whole body in ours, which implies that adjacent body parts are mapped closely. By contrast, the baseline does not always map adjacent body parts closely. For example, the upper leg between the hip and knee is closer to the background descriptors than to the ankle or knee descriptors. This smooth mapping makes our method robust to pose estimation errors because the descriptors do not change rapidly across adjacent body parts and are therefore insensitive to estimation errors. In addition, the part descriptors adapt to distinguish the informative parts more finely. For example, the mapped color varies sharply from elbow to shoulder and differentiates the detailed regions. Based on these properties, the learned part descriptors better support the person re-identification task and improve the accuracy.

5 Implementation Details

Network Architecture. We use a sub-network of the first version of GoogLeNet [54] as the appearance map extractor \(\mathcal {A}\), from the image input of size \(160 \times 80\) to the output of inception4e, which is followed by a \(1\times 1\) convolution layer and a batch normalization layer to reduce the dimension to 512 (Fig. 2). Moreover, we optionally adopt dilation filters in the layers from the inception4a to the final layer, resulting in \(20 \times 10\) response maps. Figure 2 illustrates the architecture of the part map extractor \(\mathcal {P}\). We use a sub-network of the OpenPose network [4], from the image input to the output of stage2 (i.e., concat_stage3) to extract 185 pose heat maps, which is followed by a \(3\times 3\) convolution layer and a batch normalization layer, thereby outputting 128 part maps. We adopt the compact bilinear pooling [14] to aggregate the two feature maps into a 512-dimensional vector \(\mathbf {f}\).
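
The following PyTorch sketch summarizes this wiring. The truncated GoogLeNet and OpenPose backbones are taken as given (placeholders here), and only the reduction heads described above are spelled out; the 832-channel input of the appearance head corresponds to the inception4e output, while the class and argument names are our own assumptions.

```python
import torch
import torch.nn as nn

class TwoStreamNet(nn.Module):
    """A sketch of the two-stream architecture in Sect. 5. `appearance_backbone`
    is assumed to be GoogLeNet truncated at inception4e (832 channels) and
    `pose_backbone` is assumed to be OpenPose truncated at concat_stage3
    (185 heat maps); both are supplied by the caller."""

    def __init__(self, appearance_backbone, pose_backbone):
        super().__init__()
        self.appearance_backbone = appearance_backbone
        self.pose_backbone = pose_backbone
        # 1x1 conv + BN reducing the appearance map to c_A = 512 channels
        self.appearance_head = nn.Sequential(
            nn.Conv2d(832, 512, kernel_size=1), nn.BatchNorm2d(512))
        # 3x3 conv + BN turning the 185 pose heat maps into c_P = 128 part maps
        self.part_head = nn.Sequential(
            nn.Conv2d(185, 128, kernel_size=3, padding=1), nn.BatchNorm2d(128))

    def forward(self, image):                                       # image: (N, 3, 160, 80)
        A = self.appearance_head(self.appearance_backbone(image))   # (N, 512, h, w)
        P = self.part_head(self.pose_backbone(image))               # (N, 128, h, w)
        return A, P
```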

Compact Bilinear Pooling. The bilinear transformation over the 512-dimensional appearance vector and the 128-dimensional part vector results in an extremely high-dimensional vector, which incurs a large computational and memory cost. To resolve this issue, we use the tensor sketch approach [44] to compute a compact representation, as in [14]. The key idea of the tensor sketch approach is that the inner product between two high-dimensional vectors, on which the Euclidean distance is based, can be approximated as the inner product of dimension-reduced vectors that are random projections of the original vectors. Details can be found in [44].
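
A minimal NumPy sketch of the tensor sketch estimator [44] is shown below; the sketch dimension D and all variable names are illustrative rather than the settings used in our experiments. In practice, each location's descriptor pair is sketched and the sketches are spatially average-pooled, which, by linearity, is equivalent to sketching \(\mathbf {f}\) itself.

```python
import numpy as np

rng = np.random.default_rng(0)
c_A, c_P, D = 512, 128, 512            # input dims and (illustrative) sketch dimension

# fixed random hash indices and signs, one pair per input stream
h1, s1 = rng.integers(0, D, c_A), rng.choice([-1.0, 1.0], c_A)
h2, s2 = rng.integers(0, D, c_P), rng.choice([-1.0, 1.0], c_P)

def count_sketch(x, h, s):
    """Project x into R^D: coordinate i adds s[i] * x[i] to bin h[i]."""
    y = np.zeros(D)
    np.add.at(y, h, s * x)
    return y

def tensor_sketch(a, p):
    """Approximate vec(a (outer) p) in R^D: circular convolution of the two
    count sketches, computed as a product in the FFT domain [44]."""
    fa = np.fft.rfft(count_sketch(a, h1, s1))
    fp = np.fft.rfft(count_sketch(p, h2, s2))
    return np.fft.irfft(fa * fp, n=D)

# the inner product of two sketches approximates <vec(a ⊗ p), vec(a' ⊗ p')>
a, p = rng.standard_normal(c_A), rng.standard_normal(c_P)
a2, p2 = a + 0.1 * rng.standard_normal(c_A), p + 0.1 * rng.standard_normal(c_P)
approx = tensor_sketch(a, p) @ tensor_sketch(a2, p2)
exact = (a @ a2) * (p @ p2)
print(f"exact {exact:.1f}  sketch {approx:.1f}")   # unbiased estimate; agreement improves with larger D
```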

Network Training. The appearance map extractor \(\mathcal {A}\) and the part map extractor \(\mathcal {P}\) are fine-tuned from the networks pre-trained on ImageNet [46] and COCO [29], respectively. The added layers are initialized following [17]. We use the stochastic gradient descent algorithm. The initial learning rate, weight decay, and momentum are set to 0.01, \(2 \times 10^{-4}\), and 0.9, respectively. The learning rate is decreased by a factor of 5 after every 20,000 iterations. All the networks are trained for 75,000 iterations.

We follow [75] to sample a mini-batch of samples at each iteration and use all the possible triplets within each mini-batch. The gradients are computed using the acceleration trick presented in [75]. In each iteration, we sample a mini-batch of 180 images, e.g., on average 18 identities, each with 10 images. In total, there are approximately \(10^2\cdot (180-10)\cdot 18 \approx 3 \times 10^5\) triplets in each iteration.
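
The triplet count can be reproduced with a short enumeration; the function below is an illustrative sketch of exhaustive in-batch triplet mining, not the accelerated computation of [75].

```python
import numpy as np

def all_triplets(labels):
    """Enumerate every (query, positive, negative) index triple in a batch."""
    labels = np.asarray(labels)
    triplets = []
    for q in range(len(labels)):
        positives = np.flatnonzero((labels == labels[q]) & (np.arange(len(labels)) != q))
        negatives = np.flatnonzero(labels != labels[q])
        triplets.extend((q, p, n) for p in positives for n in negatives)
    return triplets

# 18 identities x 10 images each, as in the mini-batches described above
labels = np.repeat(np.arange(18), 10)
print(len(all_triplets(labels)))   # 275400 = 18 * (10*9) * 170, i.e. roughly 3 x 10^5
```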

6 Experiments

6.1 Datasets

Market-1501 [79]. This dataset is one of the largest benchmark datasets for person re-identification. Six cameras are used: five high-resolution cameras and one low-resolution camera. There are 32,668 DPM-detected pedestrian image boxes of 1,501 identities: 750 identities are utilized for training and the remaining 751 identities are used for testing. There are 3,368 query images and 19,732 gallery images with 2,793 distractors.

CUHK03 [25]. This dataset consists of 13,164 images of 1,360 people captured by six cameras. Each identity appears in two disjoint camera views (i.e., 4.8 images in each view on average). We divided the train/test set following the previous work [25]. For each test identity, two images are randomly sampled as the probe and gallery images, and the average accuracy over 20 trials is reported as the final result.

CUHK01 [24]. This dataset comprises 3884 images of 971 people captured in two disjoint camera views. Two images are captured for each person from each of the two cameras (i.e., a total of four images). Experiments are performed under two evaluation settings [1], using 100 and 486 test IDs. Following the previous works [1, 7, 10, 75], we fine-tuned the model from the one learned from the CUHK03 training set for the experiments with 486 test IDs.

DukeMTMC [45]. This dataset was originally proposed for video-based person tracking and re-identification. We use the fixed train/test split and evaluation setting following [31]. It includes 16,522 training images of 702 identities, 2,228 query images of 702 identities, and 17,661 gallery images.

MARS [77]. This dataset is proposed for video-based person re-identification. It consists of 1,261 different pedestrians captured by at least two cameras. There are 509,914 bounding boxes and 8,298 tracklets from 625 identities for training, and 681,089 bounding boxes and 12,180 tracklets from 636 identities for testing.

6.2 Evaluation Metrics

We use both the cumulative matching characteristics (CMC) and mean average precision (mAP) to evaluate the accuracy. The CMC score measures the quality of identifying the correct match at each rank. When there are multiple ground truth matches, CMC cannot measure how well all of them are ranked. Therefore, we also report the mAP scores for Market-1501, DukeMTMC, and MARS, where more than one ground truth image is in the gallery.
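
For reference, a simplified computation of both metrics is sketched below (single gallery, no camera-based filtering); the function and variable names are our own.

```python
import numpy as np

def cmc_and_map(sim, query_ids, gallery_ids, max_rank=10):
    """A sketch of CMC and mAP. `sim` is a (num_query, num_gallery) similarity
    matrix; junk/same-camera filtering used by the benchmarks is omitted."""
    query_ids, gallery_ids = np.asarray(query_ids), np.asarray(gallery_ids)
    num_q = sim.shape[0]
    cmc = np.zeros(max_rank)
    aps = []
    for q in range(num_q):
        order = np.argsort(-sim[q])                      # gallery sorted by similarity
        matches = gallery_ids[order] == query_ids[q]     # True where the identity matches
        if not matches.any():
            continue
        first_hit = np.flatnonzero(matches)[0]
        if first_hit < max_rank:
            cmc[first_hit:] += 1                         # a rank-k hit counts for all k >= first_hit
        hits = np.cumsum(matches)                        # average precision for this query
        precision = hits[matches] / (np.flatnonzero(matches) + 1)
        aps.append(precision.mean())
    return cmc / num_q, float(np.mean(aps))
```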

Fig. 6. (a) Comparing models, with and without part maps, on different datasets. (b) Comparing models, with and without part maps, on different architectures of the appearance map extractor. (c) Comparison of different methods to aggregate the appearance and part maps. (d) Comparison of different pooling methods on the appearance maps. If not specified, the results are reported on Market-1501.

6.3 Comparison with the Baselines

We compare the proposed method with the baselines in three aspects. In this section, unless specified otherwise, all experiments are performed on the Market-1501 dataset, none of the models uses dilation, and \(\mathcal {P}_{pose}\) is trained together with the other parameters.

Effect of Part Maps. We compare our method with a baseline that does not explicitly use body parts. As the baseline network, we use the appearance map extractor of Eq. (1), followed by a global spatial average pooling and a fully connected layer, outputting a 512-dimensional image descriptor. Figures 6(a) and (b) compare the proposed method with the baseline, while varying the training strategy: fixing and training \(\mathcal {P}_{pose}\). Fixing \(\mathcal {P}_{pose}\) initializes \(\mathcal {P}_{pose}\) using the pre-trained weights [4, 29] and fixes them throughout training. Training \(\mathcal {P}_{pose}\) initializes \(\mathcal {P}_{pose}\) in the same way, but fine-tunes the network using the loss of Eq. (7) during training. Figure 6(a) illustrates the accuracy comparison on three datasets, Market-1501, MARS, and Duke. It shows that using part maps consistently improves the accuracy over the baseline on all three datasets. In addition, training \(\mathcal {P}_{pose}\) improves the accuracy considerably more than fixing \(\mathcal {P}_{pose}\). This implies that the part descriptors are adapted to better serve the person re-identification task. Figure 6(b) shows the accuracy comparison while varying the appearance sub-network architecture. Similarly, the baseline accuracy is improved when part maps are introduced and further improved when \(\mathcal {P}_{pose}\) is fine-tuned during training.

Effect of Bilinear Pooling. Figure 6(c) compares the proposed method (bilinear) to the baseline with a different aggregator. For the given appearance and part maps, concat+averagepool+linear generates a feature vector by concatenating two feature maps, spatially average pooling, and feeding through a fully connected layer, resulting in a 512-dimensional vector. The result shows that bilinear pooling consistently achieves higher accuracy than the baseline, for both cases when \(\mathcal {P}_{pose}\) is fixed/trained.

Comparison with Previous Pose-Based Methods. Finally, we compare our method with three previous works [50, 74, 78], which use human pose estimation, on Market-1501. For a fair comparison, we use the reduced CPM (R-CPM, \(\sim \)3M parameters) utilized in [50] as \(\mathcal {P}_{pose}\). The complexity of the R-CPM is lower than that of the standard FCN (\(\sim \)6M parameters) used in [74] and the CPM (30M parameters) used in [78]. As the appearance network, [74] used GoogLeNet and [78] used ResNet50; [50] used 13 inception modules, whereas we use 7. Table 1 shows the comparison. Compared with the methods of [50, 74, 78], the proposed method (Inception V1, R-CPM) achieves increases of 4% and 9% in rank@1 accuracy and mAP, respectively. This shows that our method uses the part information more effectively than the previous approaches.

6.4 Comparison with State-of-the-Art Methods

Market-1501. Table 1 shows the comparison over two query schemes, single query and multi-query. Single query takes one image from each person, whereas multi-query takes multiple images. For the multi-query setting, one descriptor is obtained from multiple images by averaging the features of the individual images. Our approach achieves the best accuracy in terms of both mAP and rank@K for both single and multi-query. We also provide the result after re-ranking [85], which further boosts the accuracy. In addition, we conduct experiments on the expanded dataset with 500K additional images [79]. Following the standard evaluation protocol [19], we report the results over four different gallery sets of size 19,732, 119,732, 219,732, and 519,732, using two evaluation metrics (i.e., rank-1 accuracy and mAP). Table 2 reports the results. The proposed method outperforms all the other methods.

Table 1. Accuracy comparison on Market-1501
Table 2. Accuracy comparison on Market-1501+500k.

CUHK03. We report the results with two person boxes: manually labeled and detected. Table 3 presents the comparison with existing solutions. In the case of detected boxes, the state-of-the-art accuracy is achieved. With manual bounding boxes, our method also achieves the best accuracy.

Table 3. Accuracy comparison on CUHK03 and CUHK01
Table 4. Accuracy comparison on DukeMTMC
Table 5. Accuracy comparison on MARS

CUHK01. We compare the results under two evaluation settings (i.e., 100 and 486 test IDs) in Table 3. For 486 test IDs, the proposed method shows the best result. For 100 test IDs, our method achieves the second best result, following [16]. Note that [16] fine-tuned a model learned from CUHK03+Market-1501, whereas we trained our model using the 871 training IDs of the CUHK01 dataset, following the settings in previous works [1, 7, 10, 75].

DukeMTMC. We follow the setting in [31] to conduct the experiments. Table 4 reports the results. The proposed method achieves the best result for both with and without re-ranking.

MARS. We also evaluate our method on one video-based person re-identification dataset [77]. We use our approach to extract the representation for each frame and aggregate the representations of all the frames using temporal average pooling, which shows similar accuracy to other aggregation schemes (RNN and LSTM). Table 5 presents the comparison with the competing methods. Our method shows the highest accuracy over both image-based and video-based approaches.
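
A minimal sketch of this aggregation is given below; it assumes the per-frame descriptors have already been extracted, and the final re-normalization (for the inner-product similarity) is our own convention.

```python
import torch

def tracklet_descriptor(frame_features):
    """Temporal average pooling over a tracklet: `frame_features` is a (T, d)
    tensor of per-frame part-aligned descriptors; the tracklet descriptor is
    their mean, re-normalized here for inner-product comparison."""
    f = frame_features.mean(dim=0)
    return f / f.norm()
```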

7 Conclusions

We propose a new method for person re-identification. The key factors that contribute to the superior performance of our approach are as follows. (1) We adopt part maps in which the parts are not pre-defined but learned specifically for person re-identification. They are learned to minimize the re-identification loss with the guidance of a pre-trained pose estimation model. (2) The part map representation provides a fine-grained/robust differentiation of body parts depending on their usefulness for re-identification. (3) We use part-aligned representations to handle the body part misalignment problem. The resulting approach achieves superior/competitive person re-identification performance on the standard image and video benchmark datasets.