Abstract
Deep supervised learning has achieved tremendous success over the last decade. However, its shortcomings, such as its dependence on costly manual annotation of large datasets and its vulnerability to adversarial attacks, have prompted researchers to seek alternatives. Contrastive learning (CL) for self-supervised learning (SSL) has emerged as an effective one. This paper provides a comprehensive review of CL methodology in terms of its approaches, encoding techniques and loss functions, and discusses applications of CL across domains such as natural language processing (NLP), computer vision, and speech and text recognition and prediction. It presents an overview of and background on SSL to establish the introductory ideas and concepts, and offers a comparative study of the works that apply CL methods to downstream tasks in each domain. Finally, it discusses the limitations of current methods, as well as the additional techniques and future directions needed to make meaningful progress in this area.
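To make the loss-function discussion concrete, the following is a minimal sketch, in PyTorch, of the NT-Xent (InfoNCE) contrastive objective popularized by SimCLR. It assumes a batch of N examples encoded under two augmentations; the function name, tensor shapes and temperature value are illustrative, not drawn from any specific implementation reviewed here.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_i, z_j, temperature=0.5):
    # z_i, z_j: (N, D) embeddings of two augmented views of the same N inputs.
    n = z_i.size(0)
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)  # (2N, D), unit-norm
    sim = z @ z.t() / temperature                         # cosine-similarity logits
    sim.fill_diagonal_(float('-inf'))                     # a sample is never its own pair
    # Row k's positive is the other view of the same input, N rows away.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Illustrative usage: two differently augmented views of one batch,
# each passed through the same encoder (random tensors stand in here).
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = info_nce_loss(z1, z2, temperature=0.1)
```

Each row's logits score one positive (the other view of the same example) against the 2N − 2 in-batch negatives, so minimizing the cross-entropy pulls positive pairs together and pushes all other pairs apart on the unit hypersphere.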
Data availability
Not applicable.
Acknowledgements
Not applicable.
Funding
Not applicable.
Contributions
PK wrote and organised the paper. PR formatted and presented the diagrammatic representations. SC supervised the work.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Yes.
Code availability
Not applicable.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Kumar, P., Rawat, P. & Chauhan, S. Contrastive self-supervised learning: review, progress, challenges and future research directions. Int J Multimed Info Retr 11, 461–488 (2022). https://doi.org/10.1007/s13735-022-00245-6