Abstract
This paper addresses the problem of unsupervised domain adaptation for pedestrian detection in crowded scenes. First, we utilize an iterative algorithm to iteratively select and auto-annotate positive pedestrian samples with high confidence as training samples for the target domain. Meanwhile, we reuse negative samples from the source domain to compensate for the imbalance between the amounts of positive and negative samples. Second, based on the deep network we design an unsupervised regularizer to mitigate the influence of data noise. More specifically, we transform the last fully connected layer into two sub-layers — an element-wise multiply layer and a sum layer — and add the unsupervised regularizer to further improve the domain adaptation accuracy. In experiments on pedestrian detection, the proposed method boosts the recall value by nearly \(30\,\%\) while the precision stays almost the same. Furthermore, we apply our method to standard domain adaptation benchmarks under both supervised and unsupervised settings and also achieve state-of-the-art results.
1 Introduction
Deep neural networks have shown great power on traditional computer vision tasks; however, a sufficiently large labelled dataset is needed to train a reliable deep model. The annotation process for pedestrian detection in crowded scenes is even more resource-consuming, because the concrete locations of pedestrian instances must be labelled. In modern society, millions of cameras are deployed for surveillance. However, these surveillance scenes vary in lighting, background, viewpoint, camera resolution and so on. Directly applying models trained on old scenes results in poor performance on new scenes due to changes in the data distribution. It is also impractical to annotate pedestrian instances for every surveillance scene.
When there are few or no labelled data in the target domain, domain adaptation helps to reduce the amount of labelled data needed. Basically, unsupervised domain adaptation aims to shift a model trained on the source domain to a target domain for which only unlabelled data are provided. Most traditional works [1–5] either learn a shared representation between the source and target domains, or project features into a common subspace. Recently, several works [6–8] have also been proposed to learn a scene-specific detector with deep architectures. However, heuristic methods are still needed either for constructing the feature space or for reweighting samples. Our motivation for developing a domain adaptation architecture is to reduce the heuristic steps required during the adaptation process.
In this paper, we propose a new approach for unsupervised deep domain adaptation for pedestrian detection. First, we utilize an iterative algorithm to iteratively auto-annotate target samples with high confidence as positive pedestrian instances on the target domain. During each iteration, these auto-annotated data are used as the training set to update the target model. However, the auto-annotated samples still suffer from a lack of negative samples and the existence of false positives, which would otherwise lead to an explosion of predictions on non-pedestrian instances. Therefore, in order to compensate for the quantitative imbalance between positive and negative samples, we randomly sample negative instances from the source domain and mix them into the training set. Second, based on the deep network, we further design an unsupervised regularizer to mitigate the influence of data noise and avoid overfitting. More specifically, in order to obtain a better regularization effect during the adaptation process, we propose to transform the last fully connected layer of the deep model into two sub-layers, an element-wise multiply layer and a sum layer. The unsupervised regularizer is then added on the element-wise multiply layer, so that it can adjust all weights in the deep network and yield better performance.
The contributions of our work are threefold.

We propose an adaptation framework to learn scene-specific deep detectors for target domains by unsupervised methodologies, which adaptively selects positive instances with high confidence. It can be easily deployed to various surveillance scenes without any additional annotations.

Under this framework, we combine both a supervised term and an unsupervised regularizer in our loss function. The unsupervised regularizer helps to reduce the influence of data noise in the auto-annotated data.

More importantly, for better performance of the unsupervised regularizer, we propose to transform the last fully connected layer of the deep network into two sub-layers, an element-wise multiply layer and a sum layer. Thus, all weights of the deep network can be adjusted under the unsupervised regularizer. To the best of our knowledge, this is the first attempt to transform fully connected layers for the purpose of domain adaptation.
The remainder of this paper is organized as follows. Section 2 reviews related works. Section 3 presents the details of our approach. Experimental results are shown in Sect. 4. Section 5 concludes the paper.
2 Related Work
In many detection works, a generic model trained on a large amount of samples from the source domain is directly used to detect on the target domain, under the assumption that target-domain samples are a subset of the source domain. However, when the data distributions of the target and source domains differ substantially, performance drops significantly. Domain adaptation aims to reduce the amount of labelled data needed for the target domain.
Many domain adaptation works try to learn a common representation space shared between the source and target domains. Saenko et al. [1, 2] propose both linear-transform-based and kernel-transform-based techniques to minimize domain changes. Gopalan et al. [3] project features onto a Grassmann manifold instead of operating on raw features. Alternatively, Mesnil et al. [9] use transfer learning to obtain good representations. However, these methods are limited in that scene-specific features are not learned to boost accuracy.
Another group of works [4, 5, 10, 11] on domain adaptation makes the distributions of the source and target domains more similar. Among these works, Maximum Mean Discrepancy (MMD) [12] is used as a metric to re-select samples from the source domain so that their distribution resembles that of the target samples. In [13], MMD is added on the last feature vector of the network as a regularizer. Different from these methods, our work transforms the last fully connected layer into two sub-layers, an element-wise multiply layer and a sum layer. As the element-wise multiply layer is the last layer that contains weights before the output layers, our unsupervised regularizer on the element-wise multiply layer can adjust all weights of the deep network during training.
There are also deep adaptation works that construct scene-specific detectors. Wang et al. [6] explore context cues to compute confidence, and Zeng et al. [7] learn the distribution of target samples and propose a cluster layer for scene-specific visual patterns. These works reweight auto-annotated samples in their final objective function, and additional context cues are needed for reliable performance. However, heuristic methods are required to select reliable samples. Alternatively, Hattori et al. [8] learn a scene-specific detector by generating a spatially-varying pedestrian appearance model, and Pishchulin et al. [14] use 3D shape models to generate training data. However, such synthesis for domain adaptation is also costly. Compared with these methods, our approach does not include heuristic preprocessing steps, so its performance is not affected by them.
3 Our Approach
In this section, we introduce our unsupervised domain adaptation architecture for pedestrian detection in crowded scenes. Unsupervised domain adaptation aims to shift a model trained on the source domain to a target domain for which only unlabelled data are provided. Under the unsupervised setting, we use an iterative algorithm to iteratively auto-annotate target samples and update the target model. As the auto-annotated samples may contain noise, performance may be degraded by wrongly annotated samples. Therefore, an unsupervised regularizer is introduced to mitigate the influence of data noise on the target model. More specifically, based on the assumption that the source domain and the target domain should share the same feature space after the feature extraction layers, we encode the unsupervised regularizer as a constraint that the distributions of the data representations on the element-wise multiply layer should be similar between the source domain and the target domain.
The adaptation architecture of our approach consists of three parts – the source stream, the target stream and an unsupervised regularizer, as shown in Fig. 1. The source stream takes samples from the source domain as input, while the target stream is trained on auto-annotated positive samples from the target domain and negative samples from the source domain. The two streams can use any deep detection network as their basic model, with its detection loss serving as the supervised loss function of each stream. In our experiments, we use the detection network mentioned in Sect. 4.1 as the basic model. The unsupervised regularizer is integrated into the loss function of the target stream.
In the following, we will first describe our iterative algorithm which iteratively selects samples from the target domain, and updates the target model accordingly (Sect. 3.1). Then, we will introduce the loss function we designed for updating the target model (Sect. 3.2), as well as the proposed unsupervised regularizer for improving the domain adaptation performance (Sect. 3.3).
3.1 Iterative Algorithm
In this section, we introduce the iterative algorithm used to train the target stream of our adaptation architecture. There are two reasons to employ the iterative algorithm. First, the auto-annotated data on the target domain vary at every adaptation iteration, and new positive samples are auto-annotated as the training set. Compared to methods without the iterative algorithm, this helps to avoid overfitting caused by lack of data. Second, since the unsupervised regularizer is distribution-based, it performs better with more training data.
There are two stages in the iterative algorithm, in which the source stream and the target stream are trained separately. At the initialization stage, the source model of the source stream is trained under a supervised loss function with abundant labelled data, (\(\mathbf{X}^{S}\), \(\mathbf{Y}^{S}\)), from the source domain. After its convergence, the weights of the source model \(\theta ^{S}\) are used to initialize the target stream. At the adaptation stage, the target model is trained on auto-annotated positive samples (\(\mathbf{X}^{T,n}\), \(\mathbf{Y}^{T,n}\)) from the target domain and randomly selected negative samples (\(\mathbf{X}^{S,n}\), \(\mathbf{Y}^{S,n}\)) from the source domain, under both the supervised loss function and the unsupervised regularizer. Since the auto-annotated data are all regarded as positive samples, negative samples from the source domain are randomly selected to compensate for the lack of negative instances; these are human-annotated and can thus provide true negatives. Note that we do not jointly train the two streams at the adaptation stage: the weights of the source model stay fixed and serve as a distribution reference for the unsupervised regularizer. The complete adaptation process is illustrated in Algorithm 1. After a predetermined iteration limit \(N^{I}\) is reached, we obtain our final detection model on the target domain.
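The two-stage procedure above can be sketched in a few lines. In this sketch, `train`, `detect`, and `sample_negatives` are hypothetical placeholders for the detection network and data pipeline, and the confidence threshold of 0.9 is an assumed value, not one taken from the paper.

```python
def adapt(source_data, target_images, train, detect, sample_negatives,
          n_iterations, confidence_threshold=0.9):
    """Sketch of the two-stage iterative adaptation (Algorithm 1).

    `train`, `detect`, and `sample_negatives` are hypothetical callables
    standing in for the actual detection network and data pipeline.
    """
    # Initialization stage: train the source model on labelled source data.
    model = train(model=None, samples=source_data)

    # Adaptation stage: the source model is kept fixed as a distribution
    # reference; only the target model is updated at each iteration.
    for _ in range(n_iterations):
        # Auto-annotate high-confidence detections as positive samples.
        positives = [(image, box)
                     for image in target_images
                     for box, confidence in detect(model, image)
                     if confidence >= confidence_threshold]
        # Negative samples from the source domain compensate for the
        # lack of negatives among the auto-annotated data.
        negatives = sample_negatives(source_data, count=len(positives))
        model = train(model=model, samples=positives + negatives)
    return model
```

The key design point is that `train` at the adaptation stage updates only the target stream; the source weights initialized at the first call remain the reference for the regularizer.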
3.2 Loss Function for the Target Stream
In this section, we introduce our loss function on the target stream of our adaptation architecture, which is composed of a supervised loss and an unsupervised regularizer. The supervised loss is to learn the scenespecific bias for the target domain, while the unsupervised regularizer introduced in Sect. 3.3 plays an important part in reducing influence from data noise as well as avoiding overfitting.
We denote training samples from the source domain as \(\mathbf{X}^{S} = \{x^{S}_{i}\}^{N^{S}}_{i=1}\). For training samples on the source domain, we have corresponding annotations \(\mathbf{Y}^{S} = \{y^{S}_{i}\}^{N^{S}}_{i=1}\) with \(y^{S}_{i} = (b^{S}_{i},l^{S}_{i})\), where \(b^{S}_{i} = (x,y,w,h) \in R^{4}\) is the bounding box location and \(l^{S}_{i} \in \{0,1\}\) is the label indicating whether \(x^{S}_{i}\) is a pedestrian instance. At the \(n^{th}\) adaptation iteration, we have two sets of training samples: \(N^{T,n}\) auto-annotated positive samples from the target domain \(\mathbf{X}^{T,n} = \{x^{T,n}_{j}\}^{N^{T,n}}_{j=1}\) and \(N^{T,n}\) negative samples from the source domain \(\mathbf{X}^{S,n} = \{x^{S,n}_{k}\}^{N^{T,n}}_{k=1}\). Their corresponding annotations can be denoted as \(\mathbf{Y}^{T,n} = \{y^{T,n}_{j}\}^{N^{T,n}}_{j=1}\) and \(\mathbf{Y}^{S,n} = \{y^{S,n}_{k}\}^{N^{T,n}}_{k=1}\) with \(y^{T,n}_{j} = (b^{T,n}_{j},l^{T,n}_{j}\equiv 1,c^{T,n}_{j})\), and \(y^{S,n}_{k} = (b^{S,n}_{k},l^{S,n}_{k}\equiv 0)\), respectively. \(c^{T,n}_{*}\) is the confidence given by the auto-annotation tool and \(N^{I}\) is the maximum number of adaptation iterations. Now we can formulate the combination of supervised loss and unsupervised regularizer as follows:
\[ L(\theta ^{T,n}) = L_{S}(\theta ^{T,n}) + \alpha \, L_{U}(\theta ^{T,n},\theta ^{S}), \]
\[ L_{S}(\theta ^{T,n}) = \sum _{j=1}^{N^{T,n}} H(c^{T,n}_{j})\big [R(b^{T,n}_{j};\theta ^{T,n}) + C(l^{T,n}_{j};\theta ^{T,n})\big ] + \sum _{k=1}^{N^{T,n}} \big [R(b^{S,n}_{k};\theta ^{T,n}) + C(l^{S,n}_{k};\theta ^{T,n})\big ], \]
\[ L_{U}(\theta ^{T,n},\theta ^{S}) = L_{EWM}(\theta ^{T,n},\theta ^{S}), \]
where \(L_{S}\) is the supervised loss to learn scene-specific detectors and \(L_{U}\) is the unsupervised regularizer part. \(\alpha =0.8\) is the coefficient balancing the effects of the supervised and unsupervised losses. \(\theta ^{T,n}\) denotes the coefficients of the network in the target stream at the \(n^{th}\) adaptation iteration, and \(\theta ^{S}\) denotes the coefficients of the network in the source stream. \(H(\cdot )\) is a step function used to select positive samples with high confidence among the auto-annotated data on the target domain. \(R(\cdot )\) is a regression loss for bounding box locations, such as the L1 loss, and \(C(\cdot )\) is a classification loss for bounding box confidence, such as the cross-entropy loss. \(L_{EWM}(\cdot )\), to be introduced in Sect. 3.3, is an MMD-based loss added on the element-wise multiply layer for unsupervised regularization.
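A minimal sketch of this combined objective, assuming per-sample losses are supplied as arrays and with a hypothetical cut-off `tau` realizing the step function \(H(\cdot )\) (the paper does not state its threshold):

```python
import numpy as np

def target_stream_loss(conf_t, reg_t, cls_t, reg_s, cls_s, L_ewm,
                       alpha=0.8, tau=0.5):
    """Sketch of the target-stream loss: supervised term plus
    alpha-weighted unsupervised regularizer.

    conf_t: auto-annotation confidences c of the target positives
    reg_t, cls_t: per-sample regression / classification losses (target)
    reg_s, cls_s: per-sample regression / classification losses
                  (source-domain negatives)
    L_ewm:  value of the MMD regularizer on the element-wise multiply layer
    tau:    hypothetical confidence threshold realizing H(.)
    """
    H = (np.asarray(conf_t) >= tau).astype(float)  # select confident positives
    supervised = float(
        (H * (np.asarray(reg_t) + np.asarray(cls_t))).sum()
        + (np.asarray(reg_s) + np.asarray(cls_s)).sum())
    return supervised + alpha * L_ewm
```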
3.3 Unsupervised Weights Regularizer on Element-Wise Multiply Layer
As mentioned before, the unsupervised regularizer plays an important role in reducing the influence of data noise and avoiding overfitting. In this paper, we propose to transform the last fully connected layer in order to strengthen the effect of unsupervised regularization.
Element-Wise Multiply Layer. In a deep neural network, the data of the last feature vector layer is taken as an important representation of the input image. However, in this paper, we take one step further and focus on the last fully connected layer, which serves as a decoder that decodes the rich information of the last feature vector into the final outputs. As the source model is trained with abundant labelled data on the source domain, the weights of the last fully connected layer are also well converged. A regularizer on the last fully connected layer can adjust all weights of the network, compared with one on the last feature vector layer. Denote the last feature vector, the weights of the last fully connected layer and the final outputs as \(\varvec{f}_{(1\times N^{D})}\), \(\mathbf{C}_{(N^{D}\times N^{O})}\) and \(\varvec{p}_{(1\times N^{O})}\), where \( N^{D}, N^{O}\) are the dimensions of the feature vector and the output layer, respectively. Thus the operation of the fully connected layer can be formulated as a matrix multiply:
\[ \varvec{p} = \varvec{f}\,\mathbf{C}, \]
where
\[ p_{o} = \sum _{d=1}^{N^{D}} f_{d}\, C_{d,o}, \quad o = 1,\dots ,N^{O}. \]
Inspired by this form, we separate the above formula into two sub-operations – the element-wise multiply operation and the sum operation, which can be formulated as:
\[ \varvec{m}_{o} = \varvec{f} \odot \varvec{c}_{o}^{\top }, \qquad p_{o} = \sum _{d=1}^{N^{D}} m_{o,d}, \]
where \(\varvec{c}_{o}\) is the \(o^{th}\) column of \(\mathbf{C}\) and \(\mathbf{M}_{(N^{O}\times N^{D})} = [\varvec{m}_{o}]\) is the intermediate result of the element-wise multiply operation. \(\varvec{m}_{o}\) is a vector with \(N^{D}\) dimensions, which will be the object of the unsupervised regularizer. Thus, we can equivalently transform the last fully connected layer into an element-wise multiply layer and a sum layer. The transformed element-wise multiply layer is then the last layer with weights before the output layers. Figure 2 illustrates the transformation.
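The equivalence of the two forms is easy to verify numerically; the dimensions below are arbitrary examples, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N_D, N_O = 8, 4                       # feature / output dimensions (arbitrary)
f = rng.standard_normal(N_D)          # last feature vector
C = rng.standard_normal((N_D, N_O))   # weights of the last fully connected layer

# Original fully connected layer: a single matrix multiply.
p_fc = f @ C                          # shape (N_O,)

# Equivalent two-layer form:
M = f[None, :] * C.T                  # element-wise multiply layer: m_o = f * c_o
p_sum = M.sum(axis=1)                 # sum layer: p_o = sum_d m_{o,d}

assert np.allclose(p_fc, p_sum)
```

Since `M` holds the per-dimension products before summation, a regularizer placed on it constrains every weight of `C`, not just the pooled output.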
Unsupervised Regularizer on Element-Wise Multiply Layer. This section introduces our unsupervised regularizer. As stated in Sect. 3.1, there are false positive samples among the auto-annotated data, which mislead the network and degrade performance. We therefore design an unsupervised regularizer to mitigate this influence. We assume that the weights of the element-wise multiply layer, derived from the last fully connected layer, have converged well under training with abundant source samples. Thus, when the tasks are similar, the distributions of the data representations on the element-wise multiply layer on the source domain and the target domain should also be similar, whereas false samples tend to distort this distribution. This observation is illustrated in Fig. 3, where the center of \(\varvec{m}_{o}\) of true target samples is far closer to the center of source samples than that of false target samples. Constraining the distributions of the data representations of the source and target domains to be similar thus helps to reduce the influence of data noise to some extent.
To encode this similarity, we utilize MMD (maximum mean discrepancy) [12] to compute the distance between the distributions of the element-wise multiply layers of the source domain and the target domain:
\[ L_{EWM} = \sum _{o=1}^{N^{O}} \Big \Vert \frac{1}{N^{T,n}}\sum _{j=1}^{N^{T,n}} \varvec{m}^{T,n}_{o,j} - \frac{1}{N^{S}}\sum _{i=1}^{N^{S}} \varvec{m}^{S}_{o,i} \Big \Vert ^{2}, \]
which can also be interpreted as the Euclidean distance between the centers of \(\varvec{m}^{T,n}_{o}\) and \(\varvec{m}^{S}_{o}\), summed over all output dimensions. As a comparison, the MMD regularizer on the feature vector layer can be formulated as:
\[ L_{FV} = \Big \Vert \frac{1}{N^{T,n}}\sum _{j=1}^{N^{T,n}} \varvec{f}^{T,n}_{j} - \frac{1}{N^{S}}\sum _{i=1}^{N^{S}} \varvec{f}^{S}_{i} \Big \Vert ^{2}, \]
where \(\varvec{f}\) is the data of the feature vector layer in Eq. 4 and \(\varvec{m}_{o}\) is the data of the element-wise multiply layer in Eq. 6.
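Under a linear kernel, both regularizers reduce to squared distances between batch means. A minimal NumPy sketch, where the array shapes are assumptions about how batch activations would be stacked:

```python
import numpy as np

def l_ewm(m_target, m_source):
    """MMD regularizer on the element-wise multiply layer.

    m_target: (n_t, N_O, N_D) activations m_o for the target batch
    m_source: (n_s, N_O, N_D) activations m_o for the source batch
    Sums, over output dimensions o, the squared Euclidean distance
    between the batch centers of m_o.
    """
    diff = m_target.mean(axis=0) - m_source.mean(axis=0)   # (N_O, N_D)
    return float((diff ** 2).sum())

def l_fv(f_target, f_source):
    """The same regularizer applied to the last feature vector instead."""
    diff = f_target.mean(axis=0) - f_source.mean(axis=0)   # (N_D,)
    return float((diff ** 2).sum())
```

Matching batches yield a loss of zero; any shift of the target batch center away from the source center is penalized quadratically.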
Since it is impractical to compute the distribution over the whole training set, while too few images do not yield a stable distribution for regularization, in our experiments the \(L_{EWM}(\cdot )\) loss is calculated for every batch. An example comparison of the centers of \(\varvec{m}^{S}_{o}\) over different batches is shown in Fig. 4.
4 Experiment Results
In this section, we present our experimental results on both surveillance applications and a standard domain adaptation dataset. We first evaluate our approach on video surveillance. Then we apply our approach to standard domain adaptation benchmarks under both supervised and unsupervised settings to demonstrate the effectiveness of our method.
4.1 Domain Adaptation on Crowd Dataset
Dataset and Evaluation Metrics. To show the effectiveness of our domain adaptation approach for pedestrian detection, we collected a dataset (see Footnote 1) consisting of 3 target scenes for the target domain. These three scenes contain 1308, 1213 and 331 unlabelled images, respectively. For each scene, 100 images are annotated for evaluation. Instead of labelling the whole body of a person, we label the head of a person as the bounding box during training. The motivation for labelling only pedestrian heads comes from detection in indoor or crowded scenes, where the body of a person may be invisible. The dataset for the source domain is the Brainwash dataset [15].
Our evaluation metric for detection follows the protocol defined in PASCAL VOC [16]. A predicted bounding box is judged to correctly match a ground truth bounding box if their intersection over union exceeds 50%. Multiple detections of the same ground truth bounding box are regarded as one correct prediction. For overall performance evaluation, the F1 score \(F1 = 2*precision*recall/(precision+recall)\) [17] is used; a higher F1 score means better performance. Precision-recall curves are also plotted.
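The matching criterion can be made concrete with a small helper; boxes are assumed to be (x, y, w, h) tuples as in the notation of Sect. 3.2:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) bounding boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def matches(pred, gt, threshold=0.5):
    """A prediction matches a ground truth when IoU exceeds 50%."""
    return iou(pred, gt) > threshold

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```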
Experimental Settings. The generic detection model of our adaptation architecture can be implemented with many deep detection models. In our experiments, we use the model proposed by Stewart et al. [15], which is an end-to-end detection network that requires no precomputed region proposals. For each iteration, 100 auto-annotated images from the target domain and 1000 annotated images from the source domain are alternately used for training. The outputs of our detection network are bounding box locations and their corresponding confidences, so there are two fully connected layers between the last feature vector layer and the final outputs. In our experiments, once an unsupervised regularizer is added on the element-wise multiply layer predicting box confidences, adding another on the element-wise multiply layer predicting bounding box locations brings little further improvement. Experiments on the 3 target scenes are executed separately.
Comparison with Different Methods. To demonstrate the effectiveness of our approach, five methods are compared, among which \(L_{S}(\mathbf{X}^{T,n},\mathbf{X}^{S,n})+L_{EWM}\) is our final approach:
 \(L_{S}(\mathbf{X}^{S})\) :

Source model only trained from the source domain.
 \(L_{S}(\mathbf{X}^{T,n})\) :

Only auto-labeled samples from the target domain are used for training, without any unsupervised regularizer.
 \(L_{S}(\mathbf{X}^{T,n})+L_{EWM}\) :

Only auto-labeled samples from the target domain are used for training, with an unsupervised MMD regularizer added on the last element-wise multiply layer.
 \(L_{S}(\mathbf{X}^{T,n},\mathbf{X}^{S,n})+L_{FV}\) :

Both auto-labeled images from the target domain and labeled images from the source domain are alternately sampled for training, with an unsupervised MMD regularizer [13] added on the last feature vector layer.
 \(L_{S}(\mathbf{X}^{T,n},\mathbf{X}^{S,n})+L_{EWM}\) :

Both auto-labeled images from the target domain and labeled images from the source domain are alternately sampled for training, with an unsupervised MMD regularizer added on the last element-wise multiply layer.
Figure 5 plots the precision-recall curves of the above methods on target scene 1, and Fig. 6 depicts the change in F1 score over adaptation iterations. Table 1 gives the concrete precision and recall values of the 5 methods on the three target scenes at their highest F1 scores. Examples of adaptation results are shown in Fig. 7.
Performance Evaluation. From Table 1, we make the following observations:

Compared to method \(L_{S}(\mathbf{X}^{S})\), the recall values of the other methods, which all utilize the iterative algorithm for training, are markedly larger. This implies the effectiveness of our iterative algorithm in boosting recall.

The average F1 score of \(L_{S}(\mathbf{X}^{T,n})+L_{EWM}\) is larger than that of method \(L_{S}(\mathbf{X}^{T,n})\), and its average (1 − precision) value is far smaller. Since the two methods differ only in whether the unsupervised regularizer is added to the loss function, this demonstrates that our unsupervised regularizer can mitigate the influence of data noise and thus boost the F1 score.

Compared to method \(L_{S}(\mathbf{X}^{T,n})+L_{EWM}\), the average F1 score of method \(L_{S}(\mathbf{X}^{T,n},\mathbf{X}^{S,n})+L_{EWM}\) is higher. This demonstrates the effectiveness of adding negative source samples to the training set during the adaptation process.

Compared to method \(L_{S}(\mathbf{X}^{T,n},\mathbf{X}^{S,n})+L_{FV}\), the recall values of method \(L_{S}(\mathbf{X}^{T,n},\mathbf{X}^{S,n})+L_{EWM}\) are further increased. This shows that an unsupervised regularizer added on the element-wise multiply layer provides a better regularization effect than one on the feature vector layer.

Our final method \(L_{S}(\mathbf{X}^{T,n},\mathbf{X}^{S,n})+L_{EWM}\) achieves the best results on target scenes 1 and 3. The performance on target scene 2 is quite close to the best result; the gap may result from the large background discrepancy between the source and target domains.
4.2 Domain Adaptation on Standard Classification Benchmark
In order to further demonstrate the effectiveness and generalization of our adaptation architecture, we test our method on the standard domain adaptation benchmark Office dataset [1].
Office Dataset and Experimental Settings. The Office dataset comprises 31 categories of objects from 3 domains (Amazon, DSLR, Webcam). Example images are depicted in Fig. 8. We take the Amazon domain as the source domain and the Webcam domain as the target domain, and follow the standard protocol for both supervised and unsupervised settings. We reuse the architecture from the pedestrian detection experiments and utilize AlexNet [18] as the generic model of both streams.
Performance Evaluation. In Table 2, we compare our approach with seven other recently published works in both supervised and unsupervised settings. The outstanding performance in both settings confirms the effectiveness of our iterative algorithm and of the MMD regularizer on the element-wise multiply layer.
5 Conclusions
In this paper, we introduce an adaptation architecture to learn scene-specific deep detectors for target domains. First, an iterative algorithm is utilized to iteratively auto-annotate target samples and update the target model. As the auto-annotated data lack negative samples and contain data noise, we randomly sample negative instances from the source domain. At the same time, an unsupervised regularizer is designed to mitigate the influence of data noise. More importantly, we propose to transform the last fully connected layer into an element-wise multiply layer and a sum layer for a better regularization effect.
Notes
 1.
Our dataset will be made available on http://wylin2.drivehq.com/.
References
Saenko, K., Kulis, B., Fritz, M., Darrell, T.: Adapting visual category models to new domains. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part IV. LNCS, vol. 6314, pp. 213–226. Springer, Heidelberg (2010)
Kulis, B., Saenko, K., Darrell, T.: What you saw is not what you get: domain adaptation using asymmetric kernel transforms. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1785–1792 (2011)
Gopalan, R., Li, R., Chellappa, R.: Domain adaptation for object recognition: an unsupervised approach. In: IEEE International Conference on Computer Vision (ICCV), pp. 999–1006 (2011)
Huang, J., Gretton, A., Borgwardt, K.M., Schölkopf, B., Smola, A.J.: Correcting sample selection bias by unlabeled data. In: Advances in Neural Information Processing Systems (NIPS), pp. 601–608 (2006)
Gretton, A., Smola, A., Huang, J., Schmittfull, M., Borgwardt, K., Schölkopf, B.: Covariate shift by kernel mean matching. Dataset Shift Mach. Learn. 3(4), 5 (2009)
Wang, X., Wang, M., Li, W.: Scene-specific pedestrian detection for static video surveillance. IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 361–374 (2014)
Zeng, X., Ouyang, W., Wang, M., Wang, X.: Deep learning of scene-specific classifier for pedestrian detection. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014, Part III. LNCS, vol. 8691, pp. 472–487. Springer, Heidelberg (2014)
Hattori, H., Naresh Boddeti, V., Kitani, K.M., Kanade, T.: Learning scene-specific pedestrian detectors without real data. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3819–3827 (2015)
Mesnil, G., Dauphin, Y., Glorot, X., Rifai, S., Bengio, Y., Goodfellow, I.J., Lavoie, E., Muller, X., Desjardins, G., Warde-Farley, D., et al.: Unsupervised and transfer learning challenge: a deep learning approach. In: ICML Unsupervised and Transfer Learning Workshop, vol. 27, pp. 97–110 (2012)
Gong, B., Grauman, K., Sha, F.: Connecting the dots with landmarks: discriminatively learning domaininvariant features for unsupervised domain adaptation. In: International Conference on Machine Learning (ICML), pp. 222–230 (2013)
Ghifary, M., Kleijn, W.B., Zhang, M.: Domain adaptive neural networks for object recognition. In: Pham, D.N., Park, S.B. (eds.) PRICAI 2014. LNCS, vol. 8862, pp. 898–904. Springer, Heidelberg (2014)
Gretton, A., Borgwardt, K.M., Rasch, M., Schölkopf, B., Smola, A.J.: A kernel method for the two-sample problem. In: Advances in Neural Information Processing Systems (NIPS), pp. 513–520 (2006)
Tzeng, E., Hoffman, J., Zhang, N., Saenko, K., Darrell, T.: Deep domain confusion: maximizing for domain invariance. arXiv preprint arXiv:1412.3474 (2014)
Pishchulin, L., Jain, A., Wojek, C., Andriluka, M., Thormählen, T., Schiele, B.: Learning people detection models from few training samples. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1473–1480 (2011)
Stewart, R., Andriluka, M., Ng, A.: End to end people detection in crowded scenes. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
Everingham, M., Eslami, S.A., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The pascal visual object classes challenge: a retrospective. Int. J. Comput. Vision 111(1), 98–136 (2015)
Powers, D.M.: Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation (2011)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (NIPS), pp. 1097–1105 (2012)
Gong, B., Shi, Y., Sha, F., Grauman, K.: Geodesic flow kernel for unsupervised domain adaptation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2066–2073 (2012)
Fernando, B., Habrard, A., Sebban, M., Tuytelaars, T.: Unsupervised visual domain adaptation using subspace alignment. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2960–2967 (2013)
Tommasi, T., Caputo, B.: Frustratingly easy NBNN domain adaptation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 897–904 (2013)
Chopra, S., Balakrishnan, S., Gopalan, R.: DLID: deep learning for domain adaptation by interpolating between domains. In: ICML Workshop on Challenges in Representation Learning, vol. 2 (2013)
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: DeCAF: a deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531 (2013)
Acknowledgments
The work is partially funded by the following grants: DFG (German Research Foundation) YA 351/21, NSFC 61471235, Microsoft Research Asia Collaborative Research Award. The authors gratefully acknowledge the support.
© 2016 Springer International Publishing Switzerland
Cite this paper
Liu, L., Lin, W., Wu, L., Yu, Y., Yang, M.Y. (2016). Unsupervised Deep Domain Adaptation for Pedestrian Detection. In: Hua, G., Jégou, H. (eds) Computer Vision – ECCV 2016 Workshops. ECCV 2016. Lecture Notes in Computer Science, vol 9914. Springer, Cham. https://doi.org/10.1007/978-3-319-48881-3_48
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-48880-6
Online ISBN: 978-3-319-48881-3