Similarity contrastive estimation for image and video soft contrastive self-supervised learning

Contrastive representation learning has proven to be an effective self-supervised learning method for images and videos. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives that should be contrasted with other instances, called negatives, which are considered as noise. However, several instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should contain relations between instances, i.e. semantic similarity and dissimilarity, which contrastive learning harms by considering all negatives as noise. To circumvent this issue, we propose a novel formulation of contrastive learning using semantic similarity between instances called Similarity Contrastive Estimation (SCE). Our training objective is a soft contrastive one that brings the positives closer and estimates a continuous distribution to push or pull negative instances based on their learned similarities. We empirically validate our approach on both image and video representation learning. We show that SCE performs competitively with the state of the art on the ImageNet linear evaluation protocol for fewer pretraining epochs and that it generalizes to several downstream image tasks. We also show that SCE reaches state-of-the-art results for video representation pretraining and that the learned representation generalizes to video downstream tasks. Source code is available here: https://github.com/juliendenize/eztorch.


Introduction
Self-Supervised Learning (SSL) is an unsupervised learning procedure in which the data provides its own supervision to learn a practical representation of the data. A pretext task is designed to provide this supervision. The pretrained model is then fine-tuned on downstream tasks, and several works have shown that a self-supervised pretrained network can outperform its supervised counterpart for images (Caron et al, 2020; Grill et al, 2020; Caron et al, 2021) and videos (Feichtenhofer et al, 2021; Duan et al, 2022). It has been successfully applied to various image and video applications such as image classification, action classification, object detection and action localization.
Contrastive learning is a state-of-the-art self-supervised paradigm based on Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2010) whose most successful applications rely on instance discrimination (He et al, 2020; Chen et al, 2020a; Yang et al, 2020; Han et al, 2020a). Pairs of views from the same images or videos are generated by carefully designed data augmentations (Chen et al, 2020a; Tian et al, 2020b; Feichtenhofer et al, 2021). Elements from the same pair are called positives and their representations are pulled together to learn view-invariant features. Other instances, called negatives, are considered as noise and their representations are pushed away from the positives. Frameworks based on the contrastive learning paradigm require a procedure to sample positives and negatives to learn a good data representation. Videos add the time dimension, which offers more possibilities than images to generate positives, such as sampling different clips as positives (Feichtenhofer et al, 2021; Qian et al, 2021b) or using different temporal contexts (Pan et al, 2021; Recasens et al, 2021; Dave et al, 2022).
A large number of negatives is essential (van den Oord et al, 2018) and various strategies have been proposed to increase the number of negatives (Chen et al, 2020a; Wu et al, 2018; He et al, 2020; Kalantidis et al, 2020). Sampling hard negatives (Kalantidis et al, 2020; Robinson et al, 2021; Wu et al, 2021; Hu et al, 2021b; Dwibedi et al, 2021) improves the representations but can be harmful if they are semantically false negatives, which causes the "class collision problem" (Cai et al, 2020; Wei et al, 2021; Chuang et al, 2020).
Other approaches learn from positive views without negatives by predicting pseudo-classes of different views (Caron et al, 2020, 2021; Toering et al, 2022), minimizing the feature distance between positives (Grill et al, 2020; Chen and He, 2021; Feichtenhofer et al, 2021) or matching the similarity distribution between views and other instances (Zheng et al, 2021b). These methods are free from the aforementioned problem of sampling hard negatives.
Based on this weakness of contrastive learning using negatives, we introduce a self-supervised soft contrastive learning approach called Similarity Contrastive Estimation (SCE), which contrasts positive pairs with other instances and leverages the push of negatives using the inter-instance similarities. Our method computes relations, defined as a sharpened similarity distribution, between augmented views of a batch. Each view from the batch is paired with a differently augmented query. Our objective function maintains for each query the relations and contrasts its positive with other images or videos. A memory buffer is maintained to produce a meaningful distribution. Experiments on several datasets show that our approach outperforms our contrastive and relational baselines MoCov2 (Chen et al, 2020c) and ReSSL (Zheng et al, 2021b) on images. We also demonstrate that using relations for video representation learning is better than contrastive learning.
Our contributions can be summarized as follows:

• We propose a self-supervised soft contrastive learning approach called Similarity Contrastive Estimation (SCE) that contrasts pairs of augmented instances with other instances and maintains relations among instances, for either image or video representation learning.
• We demonstrate that SCE outperforms its baselines MoCov2 (Chen et al, 2020c) and ReSSL (Zheng et al, 2021b) on several image benchmarks with the same architecture.
• We show that our proposed SCE is competitive with the state of the art on the ImageNet linear evaluation protocol and generalizes to several image downstream tasks.
• We show that our proposed SCE reaches state-of-the-art results for video representation learning by pretraining on the Kinetics400 dataset, as we beat or match previous top-1 accuracy for finetuning on HMDB51 and UCF101 with ResNet3D-18 and ResNet3D-50. We also demonstrate that it generalizes to several video downstream tasks.
Related Work

Image Self-Supervised Learning

Early Self-Supervised Learning. In early works, different pretext tasks were proposed to perform Self-Supervised Learning and learn a good data representation. They consist in transforming the input data, or part of it, to provide supervision, for example: instance discrimination (Dosovitskiy et al, 2016), patch localization (Doersch et al, 2015), colorization (Zhang et al, 2016), jigsaw puzzle (Noroozi and Favaro, 2016), counting (Noroozi et al, 2017) and angle rotation prediction (Gidaris et al, 2018).
Contrastive Learning. Contrastive learning is a learning paradigm (van den Oord et al, 2018; Wu et al, 2018; Hjelm et al, 2019; Tian et al, 2020a; He et al, 2020; Chen et al, 2020a; Misra and van der Maaten, 2020; Tian et al, 2020b; Caron et al, 2020; Grill et al, 2020; Dwibedi et al, 2021; Hu et al, 2021b; Wang et al, 2021) that outperformed the previously mentioned pretext tasks. Most successful methods rely on instance discrimination, in which a positive pair of views from the same image is contrasted with all other instances, called negatives. Retrieving many negatives is necessary for contrastive learning (van den Oord et al, 2018) and various strategies have been proposed. MoCo(v2) (He et al, 2020; Chen et al, 2020c) uses a small batch size and keeps a high number of negatives by maintaining a memory buffer of representations via a momentum encoder. Alternatively, SimCLR (Chen et al, 2020a,b) and MoCov3 (Chen et al, 2021b) use a large batch size without a memory buffer, and without a momentum encoder for SimCLR.
Sampler for Contrastive Learning. All negatives are not equal (Cai et al, 2020): hard negatives, i.e. negatives that are difficult to distinguish from positives, are the most important to sample to improve contrastive learning. However, they are potentially harmful to the training because of the "class collision" problem (Cai et al, 2020; Wei et al, 2021; Chuang et al, 2020). Several samplers have been proposed to alleviate this problem, such as using the nearest neighbor as positive in NNCLR (Dwibedi et al, 2021). Truncated-triplet (Wang et al, 2021) optimizes a triplet loss using the k-th most similar element as negative, which showed significant improvement. It is also possible to generate views by adversarial learning, as AdCo (Hu et al, 2021b) showed.
Contrastive Learning without negatives. Various siamese frameworks perform contrastive learning without the use of negatives to avoid the class collision problem. BYOL (Grill et al, 2020) trains an online encoder to predict the output of a momentum-updated target encoder. SwAV (Caron et al, 2020) enforces consistency between online cluster assignments from learned prototypes. DINO (Caron et al, 2021) proposes a self-distillation paradigm that matches distributions over pseudo-classes between an online encoder and a momentum target encoder. Barlow Twins (Zbontar et al, 2021) aligns the cross-correlation matrix between two paired outputs with the identity matrix, which VICReg (Bardes et al, 2022) stabilizes by adding an intra-batch decorrelation loss function.
Regularized Contrastive Learning. Several works regularize contrastive learning by optimizing a contrastive objective along with an objective that considers the similarities among instances. CO2 (Wei et al, 2021) adds a consistency regularization term that matches the distributions of similarity for a query and its positive. PCL (Li et al, 2021) and WCL (Zheng et al, 2021a) combine unsupervised clustering with contrastive learning to tighten representations of similar instances.
Relational Learning. Contrastive learning implicitly learns relations among instances by optimizing alignment and matching a prior distribution (Wang and Isola, 2020; Chen and Li, 2020). ReSSL (Zheng et al, 2021b) introduces an explicit relational learning objective by maintaining consistency of pairwise similarities between strongly and weakly augmented views. The pairs of views are not directly aligned, which harms the discriminative performance.
In our work, we optimize a contrastive learning objective using negatives that alleviates class collision by pulling related instances closer. We do not use a regularization term but directly optimize a soft contrastive learning objective that leverages both the contrastive and relational aspects.

Video Self-Supervised Learning
Video Self-Supervised Learning follows the advances of Image Self-Supervised Learning and often picks up ideas from the image modality, adjusting and improving them to make them relevant for videos and to make the best use of the modality.
Pretext tasks. As for images, several pretext tasks were proposed for videos in early works. Some were directly adapted from images, such as rotation prediction (Jing and Tian, 2018) and solving jigsaw puzzles (Kim et al, 2019), while others have been designed specifically for videos. These video-specific pretext tasks include predicting motion and appearance (Wang et al, 2019), the shuffling of frame (Lee et al, 2017; Misra et al, 2016) or clip (Xu et al, 2019; Jenni et al, 2020) order, and predicting the speed of the video (Benaim et al, 2020; Yao et al, 2020). These methods have been replaced over time by better-performing approaches that are less limited by a specific pretext task to learn a good representation. Recently, TransRank (Duan et al, 2022) introduced a new paradigm that predicts the temporal and spatial transformations applied to a clip relative to other transformations of the same clip, and showed promising results.
Contrastive Learning. Video contrastive learning (Han et al, 2020a; Lorre et al, 2020; Yang et al, 2020; Pan et al, 2021; Qian et al, 2021b,a; Feichtenhofer et al, 2021; Recasens et al, 2021; Sun et al, 2021; Dave et al, 2022) has been widely studied in recent years, as it gained interest after outperforming standard pretext tasks on images. Several works studied how to form positive views from different clips (Han et al, 2020a; Qian et al, 2021b; Feichtenhofer et al, 2021; Pan et al, 2021) to directly apply contrastive methods from images. CVRL (Qian et al, 2021b) extended SimCLR to videos and proposed a temporal sampler that creates temporally overlapping but not identical positive views to avoid spatial redundancy. Feichtenhofer et al (2021) extended SimCLR, MoCo, SwAV and BYOL to videos and studied the effect of using randomly sampled clips from a video to form views. They pushed the study further by sampling several positives to generalize the multi-crop procedure introduced for images by Caron et al (2020). Some works focused on combining contrastive learning with the prediction of a pretext task (Piergiovanni et al, 2020; Wang et al, 2020; Chen et al, 2021a; Hu et al, 2021b; Huang et al, 2021; Jenni and Jin, 2021). To better represent the time dimension, several approaches were designed to use different temporal context widths for the different views (Pan et al, 2021; Recasens et al, 2021; Dave et al, 2022).
In our work, we propose a soft contrastive learning objective using only RGB frames that directly generalizes our approach from images with minor changes. To the best of our knowledge, we are the first to introduce the concept of soft contrastive learning using relations for video self-supervised representation learning.

Methodology
In this section, we introduce our baselines: MoCov2 (Chen et al, 2020c) for the contrastive aspect and ReSSL (Zheng et al, 2021b) for the relational aspect. We then present our self-supervised soft contrastive learning approach called Similarity Contrastive Estimation (SCE). All these methods share the same architecture illustrated in Fig. 1a. We provide the pseudo-code of our algorithm in Appendix A.

Contrastive and Relational Learning

Siamese momentum methods based on contrastive and relational learning, such as MoCo (He et al, 2020) and ReSSL (Zheng et al, 2021b) respectively, produce two views of x, x_1 = t_1(x) and x_2 = t_2(x), from two data augmentation distributions T_1 and T_2 with t_1 ∼ T_1 and t_2 ∼ T_2. For ReSSL, T_2 is a weak data augmentation distribution compared to T_1 to maintain relations. x_1 passes through an online network f_s followed by a projector g_s to compute z_1 = g_s(f_s(x_1)). A parallel target branch containing an encoder f_t and a projector g_t computes z_2 = g_t(f_t(x_2)). z_1 and z_2 are both l_2-normalized. The online branch parameters θ_s are updated by gradient (∇) descent to minimize a loss function L, while no gradient flows through the target branch. The target branch parameters θ_t are updated at each iteration by an exponential moving average of the online branch parameters, with the momentum value m, also called keep rate, controlling the update:

θ_t ← m · θ_t + (1 − m) · θ_s,   (1)
∇_{θ_t} L = 0.   (2)

MoCo uses the InfoNCE loss, a similarity-based function scaled by the temperature τ that maximizes agreement between the positive pair and pushes negatives away:

L_{InfoNCE} = -\frac{1}{N}\sum_{i=1}^{N} \log\frac{\exp(z_i^1 \cdot z_i^2/\tau)}{\sum_{j=1}^{N}\exp(z_i^1 \cdot z_j^2/\tau)}.   (3)
ReSSL computes a target similarity distribution s^2, which represents the relations between weakly augmented instances, and the distribution of similarity s^1 between the strongly augmented instances and the weakly augmented ones. A temperature parameter is applied to each distribution: τ for s^1 and τ_m for s^2, with τ > τ_m to eliminate noisy relations:

s_{ik}^1 = \frac{\exp(z_i^1 \cdot z_k^2/\tau)}{\sum_{j=1, j\neq i}^{N}\exp(z_i^1 \cdot z_j^2/\tau)}, \quad k \neq i,   (4)
s_{ik}^2 = \frac{\exp(z_i^2 \cdot z_k^2/\tau_m)}{\sum_{j=1, j\neq i}^{N}\exp(z_i^2 \cdot z_j^2/\tau_m)}, \quad k \neq i.   (5)

The loss function is the cross-entropy between s^2 and s^1:

L_{ReSSL} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1, k\neq i}^{N} s_{ik}^2 \log\left(s_{ik}^1\right).   (6)

A memory buffer of size M >> N filled with z_2 is maintained for both methods.
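To make the two baselines concrete, the following PyTorch-style sketch computes the InfoNCE loss of MoCo and the relational loss of ReSSL from l_2-normalized embeddings. It is only an illustration under our own naming (z1, z2, queue), not the authors' code, and the default temperatures are indicative.

```python
import torch
import torch.nn.functional as F

def infonce_loss(z1, z2, queue, tau=0.2):
    """MoCo-style InfoNCE: the positive is the paired target view,
    negatives come from the memory buffer (queue).
    z1, z2: (N, D) l2-normalized online / target embeddings, queue: (M, D)."""
    pos = torch.einsum("nd,nd->n", z1, z2).unsqueeze(1)   # (N, 1) positive similarities
    neg = z1 @ queue.T                                    # (N, M) negative similarities
    logits = torch.cat([pos, neg], dim=1) / tau           # (N, 1 + M)
    labels = torch.zeros(z1.size(0), dtype=torch.long, device=z1.device)
    return F.cross_entropy(logits, labels)

def ressl_loss(z1, z2, queue, tau=0.1, tau_m=0.04):
    """ReSSL: match the online similarity distribution (strong view vs. queue)
    to a sharper target distribution (weak view vs. queue)."""
    log_p_online = F.log_softmax(z1 @ queue.T / tau, dim=1)   # (N, M)
    p_target = F.softmax(z2 @ queue.T / tau_m, dim=1)         # (N, M)
    return -(p_target * log_p_online).sum(dim=1).mean()
```

Note that in the relational loss the positive pair is never contrasted directly: only similarities to the memory buffer enter the two distributions, which is precisely the property SCE combines with a contrastive term in the next section.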

Similarity Contrastive Estimation
Contrastive learning methods damage the relations among instances that relational learning correctly builds. However, relational learning lacks the discriminating features that contrastive methods can learn. If we take the example of a dataset composed of cats and dogs, we want our model to understand that two different cats share the same appearance, but we also want it to learn to distinguish details specific to each cat. Based on these requirements, we propose our approach called Similarity Contrastive Estimation (SCE).

We argue that there exists a true distribution of similarity w*_i between a query q_i and the instances in a batch of N images x = {x_k}_{k∈{1,...,N}}, with x_i a positive view of q_i. If we had access to w*_i, our training framework would estimate the similarity distribution p_i between q_i and all instances in x, and minimize the cross-entropy between w*_i and p_i, which is a soft contrastive learning objective:

L_{SCE^*} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{N} w_{ik}^* \log\left(p_{ik}\right).   (7)

L_{SCE^*} is a soft contrastive approach that generalizes the InfoNCE and ReSSL objectives: InfoNCE is a hard contrastive loss that estimates w*_i with a one-hot label, and ReSSL estimates w*_i without the contrastive component.
We propose an estimation of w*_i based on contrastive and relational learning. We consider x_1 = t_1(x) and x_2 = t_2(x) generated from x using two data augmentations t_1 ∼ T_1 and t_2 ∼ T_2. Both augmentation distributions should be different to estimate different relations for each view, as shown in Sec. 4.1.1. We compute z_1 = h_s(g_s(f_s(x_1))) from the online encoder f_s, projector g_s and optionally a predictor h_s (Grill et al, 2020; Chen et al, 2021b). We also compute z_2 = g_t(f_t(x_2)) from the target encoder f_t and projector g_t. z_1 and z_2 are both l_2-normalized. The similarity distribution s^2_i that defines relations between the query and other instances is computed via Eq. (5). The temperature τ_m sharpens the distribution to only keep relevant relations. A weighted positive one-hot label is added to s^2_i to build the target similarity distribution w^2_i:

w_{ik}^2 = \lambda \cdot \mathbb{1}_{i=k} + (1 - \lambda) \cdot s_{ik}^2.   (8)

The online similarity distribution p^1_i between z^1_i and z_2, which includes the target positive representation unlike ReSSL, is computed and scaled by the temperature τ, with τ > τ_m so that the target distribution is sharper:

p_{ik}^1 = \frac{\exp(z_i^1 \cdot z_k^2/\tau)}{\sum_{j=1}^{N}\exp(z_i^1 \cdot z_j^2/\tau)}.   (9)

The objective function, illustrated in Fig. 1b, is the cross-entropy between each w^2_i and p^1_i:

L_{SCE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{N} w_{ik}^2 \log\left(p_{ik}^1\right).   (10)

The loss can be symmetrized by passing x_1 and x_2 through both the momentum and online encoders and averaging the two computed losses.
A memory buffer of size M >> N filled with z_2 is maintained to better approximate the similarity distributions.
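The full objective can be summarized by the illustrative PyTorch-style sketch below, which mirrors Eqs. (5) and (8)-(10). It is a simplified view under our own tensor names and defaults; the reference implementation is available in the eztorch repository.

```python
import torch
import torch.nn.functional as F

def sce_loss(z1, z2, queue, lam=0.5, tau=0.1, tau_m=0.07):
    """Similarity Contrastive Estimation (illustrative sketch).
    z1: (N, D) l2-normalized online embeddings (strong view, optionally after a predictor).
    z2: (N, D) l2-normalized target embeddings (weak view, momentum branch).
    queue: (M, D) l2-normalized memory buffer filled with past z2."""
    N = z1.size(0)

    # Target relations: sharpened similarities between z2 and the buffer (Eq. 5).
    s2 = F.softmax(z2 @ queue.T / tau_m, dim=1)                   # (N, M)

    # Target distribution: weighted one-hot positive mixed with relations (Eq. 8).
    w2 = torch.cat([lam * torch.ones(N, 1, device=z1.device),
                    (1.0 - lam) * s2], dim=1)                     # (N, 1 + M)

    # Online distribution over the positive plus the buffer (Eq. 9).
    pos = torch.einsum("nd,nd->n", z1, z2).unsqueeze(1)           # (N, 1)
    neg = z1 @ queue.T                                            # (N, M)
    log_p1 = F.log_softmax(torch.cat([pos, neg], dim=1) / tau, dim=1)

    # Soft cross-entropy between target and online distributions (Eq. 10).
    return -(w2 * log_p1).sum(dim=1).mean()
```

The symmetrized version simply applies the same function with the roles of the two views swapped and averages the two losses.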
The following proposition explicitly shows that SCE optimizes a contrastive learning objective while maintaining inter-instance relations:

Proposition 1. L_{SCE} defined in Eq. (10) can be written as:

L_{SCE} = \lambda \cdot L_{InfoNCE} + \mu \cdot L_{ReSSL} + \eta \cdot L_{Ceil}, \quad \text{with } \mu = \eta = 1 - \lambda.   (11)
The proof, which separates the positive term from the negatives, can be found in Appendix B. L_Ceil leverages how similar the positives should be to hard negatives. Because our approach is a soft contrastive learning objective, we optimize the formulation in Eq. (10) and keep the constraint μ = η = 1 − λ. It frees our implementation from having three losses to optimize, with two hyperparameters μ and η to tune. Still, we performed a small study of the objective defined in Eq. (11) without this constraint to check whether L_Ceil improves results in Sec. 4.1.1.

Table 2: Effect of varying λ on the Top-1 accuracy on ImageNet100. The optimal λ is in [0.4, 0.5], confirming that learning to discriminate while maintaining relations is best.

Empirical study
In this section, we empirically demonstrate the relevance of our proposed Similarity Contrastive Estimation (SCE) self-supervised learning approach for learning good data representations on both images and videos.

Image study
In this section, we first conduct an ablation study of our approach SCE to find the best hyperparameters on images. Secondly, we compare SCE with its baselines MoCov2 (Chen et al, 2020c) and ReSSL (Zheng et al, 2021b) using the same architecture. Finally, we evaluate SCE on the ImageNet linear evaluation protocol and assess its generalization capacity on various tasks.

Ablation study
For the ablation study, we conducted experiments on ImageNet100, which has a distribution close to ImageNet, studied in Sec. 4.1.3, with the advantage of requiring fewer resources to train. We keep implementation details close to ReSSL (Zheng et al, 2021b) and MoCov2 (Chen et al, 2020c) to ensure a fair comparison.
Dataset. ImageNet (Deng et al, 2009) is a large dataset with 1k classes, almost 1.3M images in the training set and 50K images in the validation set. ImageNet100 is a randomly selected subset of 100 ImageNet classes. We took the selected classes from Tian et al (2020a), referenced in Appendix C.
Implementation details for pretraining. We use the ResNet-50 (He et al, 2016) encoder and pretrain for 200 epochs. We apply by default the strong and weak data augmentations defined in Tab. 1. We do not use a predictor and do not symmetrize the loss by default. Specific hyperparameter details can be found in Appendix D.1.
Evaluation protocol. To evaluate our pretrained encoders, we train a linear classifier following (Chen et al, 2020c; Zheng et al, 2021b), as detailed in Appendix D.1.
Leveraging contrastive and relational learning. SCE defined in Eq. (8) leverages contrastive and relational learning via the λ coefficient. We studied the effect of varying λ on ImageNet100. The temperature parameters are set to τ = 0.1 and τ_m = 0.05. We report the results in Tab. 2. Performance increases with λ from 0 to 0.5, after which it starts decreasing. The best λ is inside [0.4, 0.5], confirming that balancing the contrastive and relational aspects provides a better representation. In the next experiments, we keep λ = 0.5.

We also performed a small study of the optimization of Eq. (11), removing L_Ceil (η = 0), to validate the relevance of our approach for τ = 0.1 and τ_m ∈ {0.05, 0.07}. The results are reported in Tab. 3. Adding the term L_Ceil consistently improves performance, empirically showing that our approach is better than simply adding L_InfoNCE and L_ReSSL. This performance boost varies with the temperature parameters, and our best setting improves by +0.9 percentage points (p.p.) in comparison with adding the two losses.
Asymmetric data augmentations to build the similarity distributions. Contrastive learning approaches use strong data augmentations (Chen et al, 2020a) to learn view-invariant features and prevent the model from collapsing. However, these strong data augmentations shift the distribution of similarities among instances that SCE uses to approximate w*_i in Eq. (8). We therefore need to carefully tune the data augmentations to estimate a relevant target similarity distribution. We list different distributions of data augmentations in Tab. 1. The weak and strong augmentations are the same as described by ReSSL (Zheng et al, 2021b). strong-α and strong-β have been proposed by BYOL (Grill et al, 2020). strong-γ combines strong-α and strong-β.
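For illustration, the weak and strong-α/strong-β pipelines can be written with torchvision as below. The parameter values follow BYOL's published settings (Grill et al, 2020) and ReSSL's weak augmentation description, not necessarily the exact values of Tab. 1, so they should be read as assumptions.

```python
from torchvision import transforms as T

normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

# ReSSL-style weak view: crop and flip only, to keep inter-instance relations consistent.
weak = T.Compose([
    T.RandomResizedCrop(224, scale=(0.2, 1.0)),
    T.RandomHorizontalFlip(),
    T.ToTensor(), normalize,
])

def strong(blur_p, solarize_p):
    """BYOL-style strong pipeline; strong-alpha and strong-beta only differ
    in their blur and solarization probabilities."""
    return T.Compose([
        T.RandomResizedCrop(224, scale=(0.2, 1.0)),
        T.RandomHorizontalFlip(),
        T.RandomApply([T.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),
        T.RandomGrayscale(p=0.2),
        T.RandomApply([T.GaussianBlur(23, sigma=(0.1, 2.0))], p=blur_p),
        T.RandomSolarize(threshold=128, p=solarize_p),
        T.ToTensor(), normalize,
    ])

strong_alpha = strong(blur_p=1.0, solarize_p=0.0)
strong_beta = strong(blur_p=0.1, solarize_p=0.2)
```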
We performed a study, reported in Tab. 4, on which data augmentations are needed to build a proper target distribution, for both the non-symmetric and symmetric settings. We report the Top-1 accuracy on ImageNet100 when varying the data augmentations applied to the online and target branches of our pipeline. For the non-symmetric setting, SCE requires the target distribution to be built from a weak augmentation distribution that maintains consistency across instances.
Once the loss is symmetrized, asymmetry with strong data augmentations gives better performance. Indeed, using strong-α and strong-β augmentations is better than using weak and strong augmentations, and using the same strong augmentation for both views performs worse. We argue that symmetrized SCE requires asymmetric data augmentations to produce different relations for each view, making the model learn more information. The effect of using stronger augmentations is balanced by averaging the results over both views. Symmetrizing the loss boosts the performance, as for (Grill et al, 2020; Chen and He, 2021).
Sharpening the similarity distributions. The temperature parameters sharpen the similarity distributions exponentially. SCE uses the temperatures τ_m and τ for the target and online similarity distributions, with τ_m < τ, to guide the online encoder with a sharper target distribution. We performed a temperature search on ImageNet100 by varying τ in {0.1, 0.2} and τ_m in {0.03, ..., 0.10}. The results are in Tab. 5. We found the best values to be τ_m = 0.07 and τ = 0.1, showing that SCE needs a sharper target distribution.
In Appendix E, this parameter search is done for the other datasets used in the comparison with our baselines. Unlike ReSSL (Zheng et al, 2021b), SCE does not collapse when τ_m → τ thanks to the contrastive aspect. Hence, it is less sensitive to the choice of temperatures.

Comparison with our baselines
We compared SCE against its baselines on 6 datasets. We keep implementation details similar to ReSSL (Zheng et al, 2021b) and MoCov2 (Chen et al, 2020c) for a fair comparison.
Small and medium datasets. We use CIFAR10 and CIFAR100 (Krizhevsky and Hinton, 2009) as small datasets, and STL10 and Tiny-ImageNet as medium datasets.

Implementation details. Architecture implementation details can be found in Appendix D.1. For MoCov2, we use τ = 0.2, and for ReSSL the best τ and τ_m they reported (Zheng et al, 2021b). For SCE, we use the best temperature parameters from Sec. 4.1.1 for ImageNet and ImageNet100, and from Appendix E for the other datasets. The same architecture is used for all methods, except for MoCov2 on ImageNet, which kept the ImageNet100 projector to improve results.
Results are reported in Tab. 6. Our reproduction of the baselines is validated, as the results are better than those reported by the authors. SCE outperforms its baselines on all datasets, showing that our method is more efficient at learning discriminating features on the pretraining dataset. We observe that our approach outperforms ReSSL more significantly on datasets smaller than ImageNet, suggesting that learning to discriminate among instances is more important for these datasets. SCE therefore has promising applications to domains with few data, such as medical applications.

ImageNet Linear Evaluation
We compare SCE with the state of the art on the widely used ImageNet linear evaluation protocol. We scaled our method using a larger batch size and a predictor to match state-of-the-art setups (Grill et al, 2020; Chen et al, 2021b).
Implementation details. We use the ResNet-50 (He et al, 2016) encoder and apply the strong-α and strong-β augmentations defined in Tab. 1. We follow the same training hyperparameters as (Chen et al, 2021b), detailed in Appendix D.2. The loss is symmetrized and we keep the best hyperparameters from Sec. 4.1.1: λ = 0.5, τ = 0.1 and τ_m = 0.07.
Multi-crop setting. We follow the multi-crop setting of (Hu et al, 2021b) and sample 6 different views, as detailed in Appendix D.2.

Evaluation protocol. We follow the protocol defined by (Chen et al, 2021b) and detailed in Appendix D.2.
We evaluated SCE at 100, 200, 300 and 1000 epochs on Top-1 accuracy on ImageNet to study the efficiency of our approach, and compare it with the state of the art in Tab. 7. At 100 epochs, SCE reaches 72.1%, and up to 74.1% at 1000 epochs. Hence, SCE converges quickly and few epochs of pretraining already provide a good representation. SCE is the Top-1 method at 100 epochs and Top-2 at 200 and 300 epochs, proving the good quality of its representation for few epochs of pretraining.
At 1000 epochs, SCE is below several state-of-the-art results. We argue that SCE suffers from keeping the λ coefficient fixed at 0.5, and that the relational and contrastive aspects do not have the same impact at the beginning and at the end of pretraining. A potential improvement would be to use a scheduler that varies λ over time.
We added multi-crop to SCE for 200 epochs of pretraining. It enhances the results but is costly in time and memory. It improves the results from 72.7% to our best result of 75.4% (+2.7 p.p.). Therefore, SCE learns from having local views, and they should maintain relations to learn better representations. We compared SCE with state-of-the-art methods using multi-crop in Tab. 8. SCE is competitive with top state-of-the-art methods that trained for 800+ epochs, with slightly lower accuracy than the best method using multi-crop (−0.3 p.p.) and without multi-crop (−0.5 p.p.). SCE is more efficient than other methods, as it reaches state-of-the-art results for fewer pretraining epochs.

Transfer Learning
We study the generalization of our proposed SCE on several tasks using our multi-crop checkpoint pretrained for 200 epochs on ImageNet.
Low-shot evaluation. Low-shot transferability of our backbone is evaluated on Pascal VOC2007. We followed the protocol proposed by Zheng et al (2021b) and select 16, 32, 64 or all images per class to train the classifier. Our results are compared with other state-of-the-art methods pretrained for 200 epochs in Tab. 10. SCE is Top-1 for 32, 64 and all images per class and Top-2 for 16 images per class, proving the generalization of our approach to few-shot learning.
We report the performance of SCE in comparison with state-of-the-art methods in Tab. 9. SCE outperforms all approaches on 7 datasets. On average, SCE is above all state-of-the-art methods as well as the supervised baseline, meaning SCE is able to generalize to a wide range of datasets.
Object detection and instance segmentation. We performed object detection and instance segmentation on the COCO dataset (Lin et al, 2014). We used the pretrained network to initialize a Mask R-CNN (He et al, 2017) up to the C4 layer. We follow the protocol of Wang et al (2021) and report the Average Precision for detection (AP^Box) and instance segmentation (AP^Mask).
We report our results in Tab. 11 and observe that SCE is the second best method after Truncated-Triplet (Wang et al, 2021) on both metrics, being slightly below their reported results and above the supervised setting. Therefore, our proposed SCE is able to generalize to object detection and instance segmentation beyond what supervised pretraining can (+1.6 p.p. of AP^Box and +1.3 p.p. of AP^Mask).

Video study
In this section, we first conduct an ablation study of our approach SCE to find the best hyperparameters on videos. Then, we compare SCE to the state of the art after pretraining on Kinetics400 and assess generalization on various tasks.

Ablation study
Pretraining Dataset. For the ablation study, we perform pretraining experiments on Mini-Kinetics200 (Xie et al, 2018), later called Kinetics200 for simplicity. It is a subset of Kinetics400 (Kay et al, 2017), so they have a close distribution, while fewer resources are required to train on Kinetics200. Kinetics400 is composed of 216k videos for training and 18k for validation over 400 action classes. However, it has been created from YouTube and some videos have been deleted; we use the dataset hosted by the CVD foundation (https://github.com/cvdfoundation/kinetics-dataset).
Evaluation Datasets. To study the quality of our pretrained representation, we perform linear evaluation classification on the Kinetics200 dataset. Also, we finetune on the first split of the UCF101 (Soomro et al, 2012) and HMDB51 (Kuehne et al, 2011) datasets. UCF101 is an action classification dataset that contains about 13.3k videos for 101 classes and has 3 different training and validation splits. HMDB51 is also an action classification dataset, with about 6.7k videos for 51 classes and 3 training and validation splits.

Linear evaluation and finetuning evaluation protocols. We follow Feichtenhofer et al (2021); details can be found in Appendix D.3. For finetuning on UCF101 and HMDB51, we only use the first split in the ablation study.

Method
Baseline and supervised learning. We define an SCE baseline which uses the hyperparameters λ = 0.5, τ = 0.1 and τ_m = 0.07. We provide the performance of our SCE baseline as well as supervised training in Tab. 12. We observe that our baseline has lower results than supervised learning, with −8.1 p.p. on Kinetics200, −1.2 p.p. on UCF101 and −3.1 p.p. on HMDB51, which shows that our representation has a large margin for improvement.
Leveraging contrastive and relational learning. As for the image study, we varied λ in Eq. (8) over the set {0, 0.125, ..., 0.875, 1} to observe the effect of leveraging the relational and contrastive aspects, and report results in Tab. 13. Using relations during pretraining improves the results compared with only optimizing a contrastive learning objective. The performance on Kinetics200, UCF101 and HMDB51 consistently increases when decreasing λ from 1 to 0.25. The best λ obtained is 0.125. Moreover, λ = 0 performs better than λ = 1. These results suggest that for video pretraining with standard image contrastive learning augmentations, relational learning performs better than contrastive learning, and leveraging both further improves the quality of the representation.

Target temperature variation. We studied the effect of varying the target temperature with values in the set τ_m ∈ {0.03, 0.04, ..., 0.08} while maintaining the online temperature τ = 0.1. We report results in Tab. 14. We observe that the best temperature is τ_m = 0.05, indicating that a sharper target distribution is required for video pretraining. We also observe that varying τ_m has a lower impact on performance than varying λ.

Table 14: Effect of varying τ_m on the Top-1 accuracy on Kinetics200, UCF101 and HMDB51 while maintaining τ = 0.1. The best τ_m is 0.05, meaning that a sharper target distribution is required.
Spatial and temporal augmentations. We tested varying and adding data augmentations that generate the pairs of views. As we are dealing with videos, these augmentations can be either spatial or temporal. We define the jitter augmentation that jitters the duration of a clip by a factor, reverse that randomly reverses the order of frames, and diff that randomly applies RGB difference to the frames. RGB difference consists in converting the frames to grayscale and subtracting them over time to approximate the magnitude of optical flow. In this work, we consider RGB difference as a data augmentation that is randomly applied during pretraining. In the literature, it is often used as a modality that provides better representation quality than RGB frames (Jing and Tian, 2018; Lorre et al, 2020; Duan et al, 2022).

Table 16: Effect of using the temporal augmentations, by applying clip duration jittering (jitter), randomly reversing the order of frames (reverse) or randomly using RGB difference (diff), on the Kinetics200, UCF101 and HMDB51 Top-1 accuracy. The diff augmentation consistently improves results on the three benchmarks and outperforms supervised pretraining. The other augmentations leave performance unchanged or decrease it on average.
Here, we only apply it during pretraining as a random augmentation; evaluation only sees RGB frames.
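A minimal sketch of this augmentation is given below; the luma weights and the handling of the clip length are assumptions rather than the exact implementation.

```python
import torch

def rgb_difference(clip: torch.Tensor) -> torch.Tensor:
    """Approximate motion by temporal differences of grayscale frames.
    clip: (T, 3, H, W) float tensor of RGB frames in [0, 1].
    Returns a (T-1, 3, H, W) tensor; the grayscale difference is replicated
    over the 3 channels so the backbone input format is unchanged."""
    # Assumed ITU-R BT.601 luma weights for the RGB-to-grayscale conversion.
    weights = torch.tensor([0.299, 0.587, 0.114], device=clip.device).view(1, 3, 1, 1)
    gray = (clip * weights).sum(dim=1, keepdim=True)   # (T, 1, H, W)
    diff = gray[1:] - gray[:-1]                        # (T-1, 1, H, W)
    return diff.repeat(1, 3, 1, 1)
```

In practice one can sample one extra frame so that the differenced clip keeps the original number of frames.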
We tested increasing the color jittering strength in Tab. 15. Using a strength of 1.0 improved our performance on all benchmarks, suggesting that video pretraining requires harder spatial augmentations than image pretraining.
We tested our temporal augmentations with a jitter factor of 0.2, meaning clips are sampled between 0.80 × 2.56 and 1.20 × 2.56 seconds, randomly applying reverse with probability 0.2 and randomly applying diff with probability 0.2 or 0.5. Results are reported in Tab. 16.

Bringing all together. We studied varying one hyperparameter from our baseline and how it affects performance. In this final study, we combined our baseline with the different best hyperparameters found, which are λ = 0.125, τ_m = 0.05, color strength = 1.0 and applying diff with probability 0.2. We report results in Tab. 17 and found that using harder augmentations increases the optimal λ value, as λ = 0.5 performs better than λ = 0.125. This indicates that relational learning by itself cannot learn a better representation through positive views that share less mutual information. The contrastive aspect of our approach proves efficient for such harder positives. We take as best configuration λ = 0.5, τ_m = 0.05, diff applied with probability 0.2 and color strength = 1.0, as it provides the best or second best results on all our benchmarks. It improves our baseline by +2.1 p.p. on Kinetics200 and UCF101, and +5.0 p.p. on HMDB51. It outperforms our supervised baseline by +0.9 p.p. on UCF101 and +1.9 p.p. on HMDB51.

Comparison with the State of the Art
Pretraining dataset. We pretrain on the Kinetics400 dataset presented in Sec. 4.2.1.

Pretraining implementation details. We use the ResNet3D-18 and ResNet3D-50 networks (Hara et al, 2018), and more specifically the slow path of Feichtenhofer et al (2019). We kept the best hyperparameters from Sec. 4.2.1, which are λ = 0.5, τ_m = 0.05, RGB difference with probability 0.2, and color strength = 1.0 on top of the strong-α and strong-β augmentations. From the randomly sampled clips, we specify whether we keep 8 or 16 frames.

Table 18: Performance of SCE for the linear evaluation protocol on Kinetics400 and finetuning on the three splits of UCF101 and HMDB51. Res_p, Res_e are the resolutions for pretraining and evaluation. T_p, T_e are the numbers of frames used for pretraining and evaluation. For Modality, "R" means RGB, "F" means optical flow, "RD" means RGB difference. Best viewed in color: gray rows highlight multi-modal trainings and green rows our results. SCE obtains state-of-the-art results on ResNet3D-18 and on the finetuning protocol for ResNet3D-50.

Table 19: Performance of SCE for video retrieval on the first split of UCF101 and HMDB51. Res_p, Res_e are the resolutions for pretraining and evaluation. T_p, T_e are the numbers of frames used for pretraining and evaluation. We report the recalls R@1, R@5 and R@10. We obtain state-of-the-art results for ResNet3D-18 on both benchmarks and further improve our results using the larger ResNet3D-50 network.
Action recognition. We compare SCE on the linear evaluation protocol on Kinetics400 and on finetuning on UCF101 and HMDB51. We kept the same implementation details as in Sec. 4.2.1. We compare our results with the state of the art in Tab. 18 on various architectures. To propose a fair comparison, we indicate for each approach the pretraining dataset and the number of frames and resolution used during pretraining as well as during evaluation. For unknown parameters, we leave the cell empty. We also compare with approaches that use the other visual modalities optical flow and RGB difference, and the different convolutional backbones S3D (Zhang et al, 2018) and R(2+1)D-18 (Tran et al, 2018).
On ResNet3D-18, even when comparing with methods using several modalities, using 8×224² frames we obtain state-of-the-art results on the three benchmarks, with 59.8% accuracy on Kinetics400, 90.9% on UCF101 and 65.7% on HMDB51. Using 16×112² frames, which is commonly used with this network, improved by +0.9 p.p. on HMDB51 and decreased by −3.2 p.p. on Kinetics400 and −1.8 p.p. on UCF101, keeping state-of-the-art results on all benchmarks except UCF101, where we are −0.5 p.p. below Duan et al (2022), which uses the RGB and RGB difference modalities.
On ResNet3D-50, we obtain state-of-the-art results using 16×224² frames on HMDB51 with 74.7% accuracy, even when comparing with methods using several modalities. On UCF101, with 95.3%, SCE is on par with the state of the art, −0.2 p.p. below Feichtenhofer et al (2021), but on Kinetics400 it is −1.9 p.p. below with 69.6%. We have the same computational budget, as they use 4 views for pretraining. Using 8 frames decreased performance by −2.0 p.p., −1.2 p.p. and −4.2 p.p. on Kinetics400, UCF101 and HMDB51 respectively, but still outperforms ρMoCo and ρBYOL with 2 views on the three benchmarks. It suggests that SCE is more efficient with fewer resources than these methods. Comparing our best results with approaches using the S3D backbone, which better fits smaller datasets, SCE has slightly lower performance than the state of the art: −1.0 p.p. on UCF101 and −0.3 p.p. on HMDB51.
Video retrieval. We performed video retrieval with our pretrained backbones on the first split of UCF101 and HMDB51. To perform this task, we extract the features of the training and testing splits using the 30-crop procedure used for action recognition, detailed in Appendix D.3. For each video in the testing split, we query its N nearest neighbors (N ∈ {1, 5, 10}) in the training split using cosine similarities. We report the recall R@N for the different N in Tab. 19. We compare our results with the state of the art on ResNet3D-18. Our proposed SCE with 16×112² frames is Top-1 on UCF101 with 74.5%, 85.6% and 90.5% for R@1, R@5 and R@10. Using 8×224² frames slightly decreases the results, which remain state of the art. On HMDB51, SCE with 8×224² frames outperforms the state of the art with 40.1%, 63.3% and 75.4% for R@1, R@5 and R@10. Using 16×112² frames decreases the results, which remain competitive with the previous state-of-the-art approach (Park et al, 2022), with −2.3 p.p., +1.5 p.p. and −1.4 p.p. on R@1, R@5 and R@10.
We also provide results using the larger ResNet3D-50 architecture, which increases our performance on both benchmarks and outperforms the state of the art on all metrics, reaching 83.9%, 92.2% and 94.9% for R@1, R@5 and R@10 on UCF101, as well as 45.9%, 69.9% and 80.5% on HMDB51. Our soft contrastive learning approach makes our representation learn features that cluster similar instances, even for generalization.
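For reference, the retrieval metric itself reduces to a nearest-neighbor search with cosine similarities. The sketch below assumes precomputed, l_2-normalized, 30-crop-averaged features and is only illustrative.

```python
import torch

def recall_at_n(test_feats, test_labels, train_feats, train_labels, ns=(1, 5, 10)):
    """test_feats: (Q, D), train_feats: (G, D), both l2-normalized.
    A query counts as correct for R@N if any of its N nearest training
    videos shares its class label."""
    sims = test_feats @ train_feats.T                  # (Q, G) cosine similarities
    knn = sims.topk(max(ns), dim=1).indices            # indices of nearest training videos
    knn_labels = train_labels[knn]                     # (Q, max(ns))
    hits = knn_labels == test_labels.unsqueeze(1)      # (Q, max(ns)) boolean matches
    return {n: hits[:, :n].any(dim=1).float().mean().item() for n in ns}
```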
Generalization to downstream tasks. We follow the protocol introduced by Feichtenhofer et al (2021) to compare the generalization of our ResNet3D-50 backbone on Kinetics400, UCF101, AVA and SSv2 with ρSimCLR, ρSwAV, ρBYOL, ρMoCo and supervised learning in Tab. 20. To ensure a fair comparison, we provide the number of views used by each method and the number of frames per view for pretraining and evaluation.
With 2 views and 8 frames, SCE is on par with ρMoCo with 3 views on Kinetics400, AVA and SSv2, but is worse than ρBYOL, especially on AVA. On UCF101, results are better than ρMoCo and on par with ρBYOL. These results indicate that our approach is more effective than contrastive learning, as it reaches similar results to ρMoCo using one less view. Using 16 frames, SCE outperforms all approaches, including supervised training, on UCF101 and SSv2, but performs worse than ρBYOL and supervised training on AVA. This study shows that SCE can generalize to various video downstream tasks, which is a criterion of a good learned representation.

Conclusion
In this paper, we introduced a self-supervised soft contrastive learning approach called Similarity Contrastive Estimation (SCE). It contrasts pairs of asymmetrically augmented views with other instances while maintaining relations among instances. SCE leverages contrastive learning and relational learning and improves the performance over optimizing only one aspect. We showed that it is competitive with the state of the art on the linear evaluation protocol on ImageNet and on video representation learning, and that it generalizes to several image and video downstream tasks. We proposed a simple but effective initial estimation of the true distribution of similarity among instances. An interesting perspective would be to propose a finer estimation of this distribution.

C Classes to construct ImageNet100
To build the ImageNet100 dataset, we used the classes shared by the CMC (Tian et al, 2020a) authors in the supplementary material of their publication. We also list these classes in Tab. 22.

D Implementation details

D.1 Ablation study and baseline comparison for images
Pretraining implementation details. We use the ResNet-50 (He et al, 2016) encoder for large datasets and ResNet-18 for small and medium datasets, with changes detailed below. We pretrain the models for 200 epochs. We apply by default the strong and weak data augmentations, defined in Tab. 1 in the main paper, with the scaling range for the random resized crop set to (0.2, 1.0). Specific hyperparameters for each dataset for the projector construction, the size of the input, the size of the memory buffer, the initial momentum value, the initial learning rate, the batch size and the applied weight decay can be found in Tab. 21. We use the SGD optimizer (Sutskever et al, 2013) with a momentum of 0.9. A linear warmup is applied during 5 epochs to reach the initial learning rate. The learning rate is scaled using the linear scaling rule and follows the cosine decay scheduler without restart (Loshchilov and Hutter, 2017). The momentum value used to update the target branch follows a cosine strategy from its initial value to reach 1 at the end of training. We do not symmetrize the loss by default.
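The learning-rate and target-momentum schedules described above can be sketched as follows; the helper names are ours and the formulas are the standard linear-warmup, cosine-decay and cosine keep-rate rules assumed from the description.

```python
import math

def lr_at_step(step, total_steps, warmup_steps, base_lr, batch_size):
    """Linear scaling rule, linear warmup, then cosine decay without restart."""
    lr = base_lr * batch_size / 256                        # linear scaling rule
    if step < warmup_steps:
        return lr * (step + 1) / warmup_steps              # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * lr * (1 + math.cos(math.pi * progress))   # cosine decay

def ema_momentum_at_step(step, total_steps, m_base):
    """Momentum (keep rate) of the target branch: cosine increase from m_base to 1."""
    return 1 - (1 - m_base) * 0.5 * (1 + math.cos(math.pi * step / total_steps))
```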
Architecture change for small and medium datasets. Because the images are smaller, and ResNet is designed for larger images, typically 224 × 224, we follow the guidance of SimCLR (Chen et al, 2020a) and replace the first 7×7 convolution of stride 2 with a 3×3 convolution of stride 1. We also remove the first pooling layer.
Evaluation protocol. To evaluate our pretrained encoders, we train a linear classifier following (Chen et al, 2020c; Zheng et al, 2021b). We train for 100 epochs on top of the frozen pretrained encoder using an SGD optimizer with an initial learning rate of 30, without weight decay and with a momentum of 0.9. The learning rate is decayed by a factor of 0.1 at 60 and 80 epochs. The data augmentations for the different datasets are:

• training set for large datasets: random resized crop to resolution 224 × 224 with the scaling range set to (0.08, 1.0) and a random horizontal flip with a probability of 0.5.
• training set for small and medium datasets: random resized crop to the dataset resolution, with a padding of 4 for small datasets and the scaling range set to (0.08, 1.0). Also, a random horizontal flip with a probability of 0.5 is applied.
• validation set for large datasets: resize to resolution 256 × 256 and center crop to resolution 224 × 224.
• validation set for small and medium datasets: resize to the dataset resolution.

D.2 ImageNet study
Pretraining implementation details. We use the ResNet-50 (He et al, 2016) encoder and apply the strong-α and strong-β augmentations, defined in Tab. 1 in the main paper, with the scaling range for the random resized crop set to (0.2, 1.0). The batch size is set to 4096 and the memory buffer to 65,536. We follow the same training hyperparameters as (Chen et al, 2021b) for the architecture. Specifically, we use the same projector and predictor, and the LARS optimizer (You et al, 2017) with a weight decay of 1.5 · 10⁻⁶ for 1000 epochs of training and 10⁻⁶ for fewer epochs. Bias and batch normalization (Ioffe and Szegedy, 2015) parameters are excluded. The initial learning rate is 0.5 for 100 epochs and 0.3 for more epochs. It is linearly warmed up for 10 epochs and then follows the cosine annealed scheduler. The momentum value follows a cosine scheduler, starting from 0.996 for 1000 epochs and 0.99 for fewer epochs, to reach 1 at the end of training.
Evaluation protocol. We follow the protocol defined by (Chen et al, 2021b). Specifically, we train a linear classifier for 90 epochs on top of the frozen encoder with a batch size of 1024 and an SGD optimizer with a momentum of 0.9 and without weight decay. The initial learning rate is 0.1, scaled using the linear scaling rule, and follows the cosine decay scheduler without restart (Loshchilov and Hutter, 2017). The data augmentations applied are:

• training set: random resized crop to resolution 224 × 224 with the scaling range set to (0.08, 1.0) and a random horizontal flip with a probability of 0.5.
• validation set: resize to resolution 256 × 256 and center crop to resolution 224 × 224.

D.3 Video study
Pretraining implementation details. We used the ResNet3D-18 and ResNet3D-50 networks (Hara et al, 2018) following the Slow path of Feichtenhofer et al (2019). The exact architecture details can be found in Tab. 23. We kept the siamese architecture used for ImageNet in Sec. 4.1.3 and, depending on the backbone and pretraining dataset, the projector and predictor architectures as well as the memory buffer size vary and are referenced in Tab. 24. The LARS optimizer with a weight decay of 1 · 10⁻⁶, with batch normalization and bias parameters excluded as in Feichtenhofer et al (2021), is used for 200 epochs of training. The learning rate follows a linear warmup until it reaches an initial value of 2.4 and then a cosine annealed scheduler. The initial learning rate is scaled following the linear scaling rule with the batch size set to 512. The momentum value follows a cosine scheduler from 0.99 to 1 and the loss is symmetrized. To sample and crop different views from a video, we follow Feichtenhofer et al (2021) and randomly sample different clips from the video that last 2.56 seconds. For Kinetics this corresponds to 64 frames at a frame rate of 25 FPS. Out of this clip we keep the number of frames specified in the main paper. By default, we sample two different clips to form positives and apply the strong-α and strong-β augmentations, defined in Tab. 1 in the main paper, to the views.

Linear evaluation protocol details. We follow Feichtenhofer et al (2021) and train a linear classifier for 60 epochs on top of the frozen encoder with a batch size of 512. We use the SGD optimizer with a momentum of 0.9 and without weight decay to reach the initial learning rate of 2, which follows the linear scaling rule with the batch size set to 512. A linear warmup is applied during 35 epochs, followed by a cosine annealing scheduler. For training, we randomly sample a clip in the video and random crop it to the size 224 × 224 after scaling the shorter side of the video to 256. A horizontal flip is also applied with a probability of 0.5. For evaluation, we follow the standard evaluation protocol of Feichtenhofer et al (2019) and sample 10 temporal clips with 3 different spatial crops of size 256 × 256 applied to each temporal clip to cover the whole video. The final prediction is the average of the predictions of the 30 sampled clips.
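The clip sampling described above (random 2.56-second clips, i.e. 64 frames at 25 FPS, from which 8 or 16 frames are kept) can be sketched as follows; the frame-indexing details are assumptions.

```python
import torch

def sample_clip(video: torch.Tensor, fps: int = 25, duration: float = 2.56,
                num_frames: int = 8) -> torch.Tensor:
    """Randomly sample a `duration`-second clip and keep `num_frames`
    evenly spaced frames out of it. video: (T, C, H, W) decoded frames."""
    clip_len = int(round(duration * fps))                              # 64 frames on Kinetics
    start = torch.randint(0, max(1, video.shape[0] - clip_len + 1), (1,)).item()
    clip = video[start:start + clip_len]
    keep = torch.linspace(0, clip.shape[0] - 1, num_frames).long()     # evenly spaced indices
    return clip[keep]                                                  # (num_frames, C, H, W)
```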
Finetuning evaluation protocol details. We follow Feichtenhofer et al (2021) for finetuning on UCF101 and HMDB51. We finetune the whole pretrained network and perform supervised training on the 101 and 51 classes respectively for 200 epochs, with a dropout of probability 0.8 before classification. We use the SGD optimizer with a momentum of 0.9 and without weight decay, with an initial learning rate of 0.1 that follows the linear scaling rule with the batch size set to 64, and a cosine annealing scheduler without warmup. For training, we randomly sample a clip in the video and random crop it to the size 224 × 224 after scaling the shorter side of the video to 256. We apply color jittering with the strong augmentation parameters, defined in Tab. 1 in the main paper, and a horizontal flip with a probability of 0.5. For evaluation, we follow the 30-crop procedure as for linear evaluation. A specific hyperparameter search for each dataset might improve results.
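The 30-crop evaluation (10 temporal clips × 3 spatial crops, predictions averaged) can be sketched as below. It assumes a model taking (B, C, T, H, W) inputs and frames already resized so that the shorter side equals the crop size; it is illustrative rather than the exact evaluation code.

```python
import torch

@torch.no_grad()
def thirty_crop_predict(model, video, num_clips=10, num_frames=8, crop_size=256):
    """Average predictions over 10 uniformly spaced clips x 3 spatial crops.
    video: (T, C, H, W) frames with min(H, W) == crop_size."""
    T, C, H, W = video.shape
    starts = torch.linspace(0, max(0, T - num_frames), num_clips).long().tolist()
    # Three crops along the longer spatial dimension (left/center/right or top/center/bottom).
    if W >= H:
        offsets = [(0, 0), (0, (W - crop_size) // 2), (0, W - crop_size)]
    else:
        offsets = [(0, 0), ((H - crop_size) // 2, 0), (H - crop_size, 0)]
    probs = []
    for s in starts:
        clip = video[s:s + num_frames]                       # (num_frames, C, H, W)
        for top, left in offsets:
            crop = clip[:, :, top:top + crop_size, left:left + crop_size]
            inp = crop.permute(1, 0, 2, 3).unsqueeze(0)      # (1, C, T, H, W)
            probs.append(model(inp).softmax(dim=-1))
    return torch.stack(probs).mean(dim=0)                    # averaged class probabilities
```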

E Temperature influence on small and medium datasets
We performed a temperature search on CIFAR10, CIFAR100, STL10 and Tiny-ImageNet by varying τ in {0.1, 0.2} and τ_m in {0.03, ..., 0.10}. The results are in Tab. 25. As for ImageNet100, we need a sharper distribution on the output of the momentum encoder. Unlike ReSSL (Zheng et al, 2021b), SCE does not collapse when τ_m → τ thanks to the contrastive aspect. For our baseline comparison in Sec. 4.1.2, we use the best temperatures found for each dataset.

Fig. 1: SCE follows a siamese pipeline illustrated in Fig. 1a. A batch x of images is augmented with two different data augmentation distributions T_1 and T_2 to form x_1 = t_1(x) and x_2 = t_2(x) with t_1 ∼ T_1 and t_2 ∼ T_2. The representation z_1 is computed through an online encoder f_s, projector g_s and optionally a predictor h_s, such that z_1 = h_s(g_s(f_s(x_1))). A parallel target branch, updated by an exponential moving average (ema) of the online branch, computes z_2 = g_t(f_t(x_2)) with f_t and g_t the target encoder and projector. In the objective function of SCE illustrated in Fig. 1b, z_2 is used to compute the inter-instance target distribution by applying a sharp softmax to the cosine similarities between z_2 and a memory buffer of representations from the momentum branch. This distribution is mixed, via a 1 − λ factor, with a one-hot label weighted by λ to form the target distribution. Similarities between z_1 and the memory buffer plus its positive in z_2 are also computed. The online distribution is computed via a softmax applied to the online similarities. The objective function is the cross-entropy between the target and the online distributions.

Table 1 :
Distributions of data augmentations applied to SCE. The weak distribution is the same as ReSSL (Zheng et al, 2021b), strong is the standard contrastive data augmentation (Chen et al, 2020a). strong-α and strong-β are two distributions introduced by BYOL (Grill et al, 2020). Finally, strong-γ is a mix between strong-α and strong-β.

Table 3 :
Effect of the loss coefficients in Eq. (11) on the Top-1 accuracy on ImageNet100. L_Ceil consistently improves performance, with gains that vary with the temperature parameters.

Table 4 :
Effect of using different distributions of data augmentations for the two views and of the loss symmetrization on the Top-1 accuracy on ImageNet100. Using a weak view for the teacher without symmetry is necessary to obtain good relations. With loss symmetry, asymmetric data augmentations improve the results, with the best obtained using strong-α and strong-β.

Table 5 :
Effect of varying the temperature parameters τ_m and τ on the Top-1 accuracy on ImageNet100. τ_m is lower than τ to produce a sharper target distribution without noisy relations. SCE does not collapse when τ_m → τ.

Table 7 :
State-of-the-art results on the Top-1 accuracy on ImageNet under the linear evaluation protocol at different numbers of pretraining epochs: 100, 200, 300, 800+. SCE is Top-1 at 100 epochs and Top-2 for 200 and 300 epochs. For 800+ epochs, SCE has lower performance than several state-of-the-art methods. Results style: best, second best.

Table 8 :
State-of-the-art results on the Top-1 accuracy on ImageNet under the linear evaluation protocol with multi-crop. SCE is competitive with the best state-of-the-art methods while pretraining for only 200 epochs instead of 800+.

Table 10 :
Transfer learning on low-shot image classification on Pascal VOC2007. All methods have been pretrained for 200 epochs. SCE is Top-1 when using 32, 64 or all images per class and Top-2 for 16 images.

Table 15 :
Effect of the color jittering strength for the strong-α and strong-β augmentations on the Kinetics200, UCF101 and HMDB51 Top-1 accuracy. Strong color jittering improves performance.

Table 17 :
Effect of combining the best hyperparameters found in the ablation study, which are λ = 0.125, τ_m = 0.05, color strength = 1.0 and randomly adding time difference, on the Kinetics200, UCF101 and HMDB51 Top-1 accuracy. Using time difference and stronger color jittering increases the optimal λ value, which indicates that contrastive learning deals efficiently with harder views and helps relational learning. The value τ_m = 0.05 performs favorably for Kinetics200 and HMDB51. Results style: best, second best.

Table 20 :
Performance of SCE in comparison with Feichtenhofer et al (2021) for linear evaluation on Kinetics400 and finetuning on the first split of UCF101, AVA and SSv2. SCE is on par with ρMoCo for fewer views. Increasing the number of frames outperforms ρBYOL on Kinetics400, UCF101 and SSv2.

Table 21 :
Architecture and hyperparameters used for pretraining on the different datasets. LR stands for the initial learning rate, WD for weight decay, BN for batch normalization (Ioffe and Szegedy, 2015), Hid for hidden, Dim for dimension, ema for the initial momentum value used to update the momentum branch. For BN: "no" means no batch normalization is used in the projector, "hid" means batch normalization after each hidden layer, "all" means batch normalization after the hidden layer and the output layer.

Table 22 :
The 100 classes selected from ImageNet to construct ImageNet100.