Correcting the Triplet Selection Bias for Triplet Loss

  • Baosheng Yu
  • Tongliang Liu
  • Mingming Gong
  • Changxing Ding
  • Dacheng Tao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11210)


Triplet loss, a popular loss function for metric learning, has achieved great success in many computer vision tasks, such as fine-grained image classification, image retrieval, and face recognition. Because the number of triplets grows cubically with the size of the training data, triplet selection is indispensable for training efficiently with triplet loss. In practice, however, training is very sensitive to how triplets are selected: it usually fails to converge with randomly selected triplets, while selecting only the hardest triplets leads to bad local minima. We argue that the bias in triplet selection degrades the performance of learning with triplet loss. In this paper, we propose a new variant of triplet loss that reduces this bias by adaptively correcting the distribution shift on the selected triplets. We refer to this new triplet loss as adapted triplet loss. We conduct experiments on MNIST and Fashion-MNIST for image classification, and on CARS196, CUB200-2011, and Stanford Online Products for image retrieval. The experimental results demonstrate the effectiveness of the proposed method.
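To make the setting concrete, below is a minimal numpy sketch of the standard triplet loss, plus a generic importance-weighted variant of the kind the abstract alludes to. The weighting interface here is illustrative only: in the paper's adapted triplet loss the per-triplet weights would come from estimating the distribution shift of the selected triplets, whereas this sketch simply accepts precomputed weights.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors: pull the anchor toward
    the positive and push it away from the negative by at least `margin`.
    Inputs are arrays of shape (batch, dim); returns per-triplet losses."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)   # squared distance a-p
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)   # squared distance a-n
    return np.maximum(d_pos - d_neg + margin, 0.0)

def weighted_triplet_loss(anchor, positive, negative, weights, margin=0.2):
    """Hypothetical importance-weighted variant: each selected triplet's loss
    is re-weighted to counteract the bias of a non-uniform triplet-selection
    scheme (the weights themselves are assumed given here)."""
    return weights * triplet_loss(anchor, positive, negative, margin)
```

With hard negatives (small `d_neg`) the unweighted loss dominates the gradient, which is one way the selection bias described above enters training; down-weighting over-represented hard triplets is the intuition behind the correction.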


Keywords: Triplet loss · Selection bias · Domain adaptation



Baosheng Yu, Tongliang Liu, and Dacheng Tao were partially supported by Australian Research Council Projects FL-170100117, DP-180103424, LP-150100671. Changxing Ding was partially supported by the National Natural Science Foundation of China (Grant No.: 61702193) and Science and Technology Program of Guangzhou (Grant No.: 201804010272).



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Baosheng Yu (1)
  • Tongliang Liu (1)
  • Mingming Gong (2, 3)
  • Changxing Ding (4)
  • Dacheng Tao (1)

  1. UBTECH Sydney AI Centre and SIT, FEIT, The University of Sydney, Sydney, Australia
  2. Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, USA
  3. Department of Philosophy, Carnegie Mellon University, Pittsburgh, USA
  4. School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China
