TLCE: Transfer-Learning Based Classifier Ensembles for Few-Shot Class-Incremental Learning

Few-shot class-incremental learning (FSCIL) aims to incrementally recognize novel classes from few examples without catastrophic forgetting of old classes or overfitting to new classes. We propose TLCE, which ensembles multiple pre-trained models to improve separation of novel and old classes. TLCE minimizes interference between old and new classes by mapping old class images to quasi-orthogonal prototypes using episodic training. It then ensembles diverse pre-trained models to better adapt to novel classes despite data imbalance. Extensive experiments on various datasets demonstrate that our transfer-learning ensemble approach outperforms state-of-the-art FSCIL methods.


Introduction
Deep learning has sparked substantial advancements in various computer vision tasks. These advancements are mainly due to the emergence of large-scale datasets and powerful GPU computing devices. However, deep learning-based methods exhibit limitations in recognizing classes that have not been incorporated into their training. In this scenario, there has been significant research on Class-Incremental Learning (CIL), which focuses on dynamically updating the model using only new samples from each additional task, while preserving knowledge about previously learned classes. On the other hand, obtaining and annotating a sufficient quantity of data samples is both complex and expensive. Certain studies are therefore dedicated to investigating CIL in situations where data availability is limited. Specifically, researchers have explored few-shot class-incremental learning (FSCIL), which aims to continuously learn new classes using only a limited number of target samples.
As a consequence, two issues arise: the potential for catastrophic forgetting of previously learned classes and the risk of overfitting to new concepts. Furthermore, Constrained Few-Shot Class-Incremental Learning (C-FSCIL) [1] introduces explicit constraints on memory and computational capacity for this learning setting: the computational cost of acquiring a new class must remain constant, and the model's memory usage may grow at most linearly as additional classes are introduced.
To solve the above issues, recent studies [2,3,4] emphasize the acquisition of transferable features by first training with the cross-entropy (CE) loss in the base session and subsequently freezing the backbone to facilitate adaptation to new classes. C-FSCIL [1] employs meta-learning to map input images to quasi-orthogonal prototypes so as to minimize interference between the prototypes of different classes. Although C-FSCIL has demonstrated superior performance, we find a prediction bias arising from class imbalance and data imbalance. We also observe that assigning hyperdimensional quasi-orthogonal vectors to each class demands a substantial number of samples and iterations. This makes it challenging to allocate prototypes to novel classes that possess only a limited number of samples.
In this paper, we propose TLCE, a transfer-learning based few-shot class-incremental learning method that ensembles classifiers which memorize different knowledge. One main inspiration is that pretraining a deep network on the base dataset and transferring knowledge to the novel classes [5,6] has been shown to be a strong baseline for few-shot classification. On the other hand, keeping interference between the new classes and the old classes small is key. Hence, we leverage the advantages of both kinds of classifiers through ensemble learning. Firstly, we employ meta-learning to train a robust hyperdimensional network (RHD) following C-FSCIL, which effectively maps input images to quasi-orthogonal prototypes for the base classes. Secondly, we integrate cosine similarity and cross-entropy loss to train a transferable knowledge network (TKN). Finally, we compute the prototype, i.e., the average of features, for each class. A test sample is then classified simply by finding its nearest prototype, measured by a weighted integration of the two similarity relationships.
Compared to C-FSCIL, our TLCE adopts a similar idea of assigning quasi-orthogonal prototypes to the base classes to keep interference minimal. The key difference is that TLCE attempts to perform equally well on all classes, regardless of the training sequence, through classifier ensembles. We conduct extensive comparisons with state-of-the-art few-shot class-incremental classification methods on miniImageNet [7] and CIFAR100 [8], and the results demonstrate the superiority of our TLCE. Ablation studies on different ensembles, i.e., different weights between the robust hyperdimensional network and the transferable knowledge network, also show the necessity of ensembling the two classifiers for better results.
In summary, our contributions are as follows: 1. We propose TLCE, transfer-learning based classifier ensembles that improve novel class separation while maintaining base class separation. 2. Without additional training or expensive computation, the proposed method can efficiently explore the comprehensive relation between prototypes and test features. 3. We conduct extensive experiments on various datasets, and the results show that our efficient method outperforms SOTA few-shot class-incremental classification methods.

Related Work
Few-Shot Learning.
FSL seeks to develop neural models for new categories using only a small number of labeled samples. Meta-learning [9] is extensively utilized to accomplish few-shot classification. The core idea is to use the episodic training paradigm to learn generalizable classifiers or feature extractors for the base-class data in an optimization-based framework [10,11,12], or to learn a distance function that measures the similarity among feature embeddings through metric learning [13,14,15,16]. On the other hand, pretraining classifiers or image encoders on the base dataset and then adapting them to the novel classes via transfer learning [5,6] has been shown to be a strong baseline for few-shot classification. Based on the meta-learned feature extractor or the pretrained deep image model, we can perform nearest-neighbor (NN) based classification, which has proven to be a simple and effective approach for FSL. Specifically, the prediction is determined by measuring the similarity or distance between the test feature and the prototypes of the labeled novel features. Due to the limited number of samples, the prototypes computed from the few-shot novel class data may not represent the underlying data distribution. Several methods [17,18,19,20] have recently been proposed to perform data calibration to obtain better samples or prototypes of the novel classes. Inspired by those representative few-shot methods, we attempt to leverage different training paradigms to acquire diverse models for calculating target prototypes in few-shot class-incremental learning tasks.
Class Incremental Learning. CIL aims to build a universal classifier over all seen classes from a stream of labeled training sets. Current CIL algorithms can be roughly divided into three categories. The first category utilizes former data for rehearsal, which enables the model to review former instances and overcome forgetting [21,22,23]. The second category estimates the importance of each parameter and keeps the important ones static [24,25,26]. Other methods design algorithms to maintain the model's knowledge and discriminability. For example, knowledge distillation-based methods build a mapping between old and new models [27,28,29]. On the other hand, several methods aim to find bias and rectify it like the oracle model [30,31,19]. FSCIL can be seen as a particular case of CIL; therefore, we can learn from some of the above methods.
Few-Shot Class-Incremental Learning. FSCIL introduces few-shot scenarios, where only a few labeled samples are available, into the task of class-incremental learning. To achieve FSCIL, many works attempt to solve the problems of catastrophic forgetting and serious overfitting from different perspectives. TOPIC [32] employs a neural gas network to preserve the topology of the feature manifold from a cognitive-inspired perspective. SKD [33] and ERDIL [34] use knowledge distillation to balance the preservation of old knowledge and the adaptation to new knowledge. Feature-space based methods focus on obtaining compact clustered features and maintaining generalization for future incremental classes [35,36,37]. From the perspective of parameter space, WaRP [38] combines the advantages of F2M [4], which finds flat minima of the loss function, and FSLL [39], which fine-tunes parameters. They push most of the previous knowledge compactly into only a few important parameters so that more parameters can be fine-tuned during incremental sessions. From the perspective of hybrid approaches, some works combine episodic training [3,40,1], ensemble learning [41,42], and so on. C-FSCIL [1] maps input images to quasi-orthogonal prototypes through episodic training so that the prototypes of different classes encounter little interference. However, achieving quasi-orthogonality among all class prototypes is difficult when dealing with novel classes that have only a limited number of labeled samples. MCNet [41] trains multiple embedding networks with diverse architectures to enhance model diversity and enable the networks to memorize different knowledge effectively. Similar to the above method, our method is based on ensemble learning, while we train two shared-architecture networks using different loss functions and training methods. [42] enhances the expression ability of extracted features through multistage pre-training and uses a meta-learning process to extract meta-features as complementary features. Please note that a novel generalization model is one with no overlap among novel class sets and no interference with base classes. In contrast to these methods, we ensemble a robust hyperdimensional (HD) network for base classes and a transferable knowledge network for novel classes from a whole new perspective.

Method
In this section, we propose the FSCIL method using model ensembles. An ideal FSCIL model should ensure that newly added categories do not interfere with the old ones and should maintain a distinct separation between them. These motivations prompt us to combine a robust hyperdimensional memory-augmented neural network and a transferable knowledge model through an ensemble. Firstly, we draw inspiration from [43,1] and employ episodic training to map the base dataset to quasi-orthogonal prototypes, thereby minimizing interference with base classes during incremental sessions. Secondly, we pretrain a model from scratch in a standard supervised way to gain a transferable knowledge space. Finally, we integrate an explicit memory (EM) into the aforementioned embedding networks, so that the EM stores the embeddings of labeled data samples as class prototypes. During testing, we use nearest-prototype classification based on similarity, thereby meeting the classification requirements for all seen classes. Note that we only need to compute the new class prototypes using the aforementioned models and update the EM, because training only takes place in the base session. Figure 1 shows the framework of our method. In the following, we provide technical details of the proposed method for few-shot class-incremental classification.

Problem Statement
FSCIL learns continuously from a sequential stream of tasks. Suppose we have a stream of labeled training sets D^1, D^2, ..., D^T. In the t-th task, the set of labels is denoted as C^t, where C^i ∩ C^j = ∅ for all i ≠ j, and the total number of classes in this task is |C^t|. D^1, with ample data, is called the base session, while D^t (t > 1) is the limited training set involving new classes (called an incremental session). We follow the conventional few-shot class-incremental learning setting, i.e., each incremental session is an N-way K-shot dataset D^t = {(x_i, y_i)}_{i=1}^{N×K}, where N is the number of novel classes and K is the number of support samples per class. In session t, the model only has access to the dataset D^t. After training with D^t, the model needs to recognize all encountered classes in ∪_{s≤t} C^s.
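To make the protocol concrete, here is a minimal sketch (function and variable names are ours, not the paper's) of how the disjoint label sets of the base session and the N-way incremental sessions are laid out, using the 60-base / 40-novel split adopted in the experiments:

```python
def build_sessions(num_base=60, num_novel=40, n_way=5):
    """Return the label sets C^t per session; labels across sessions are disjoint."""
    sessions = [list(range(num_base))]  # base session D^1 with ample data
    novel = list(range(num_base, num_base + num_novel))
    for s in range(0, num_novel, n_way):  # each incremental session adds N classes
        sessions.append(novel[s:s + n_way])
    return sessions

sessions = build_sessions()
# 1 base session + 40/5 = 8 incremental sessions
assert len(sessions) == 9
# label sets are pairwise disjoint and cover all 100 classes
flat = [c for sess in sessions for c in sess]
assert len(flat) == len(set(flat)) == 100
```

After session t, evaluation covers the union of all label sets seen so far, which is why the assertions check disjointness and full coverage.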

Robust Hyperdimensional Network (RHD)
Due to the "curse" of dimensionality, a randomly selected vector has a high probability of being quasi-orthogonal to other random vectors. As a result, representing a novel class with such a vector not only contributes incrementally to previous learning but also causes minimal interference. Hence, we follow C-FSCIL [1] to build an RHD network during the base session.
Our method comprises three primary components: a backbone network, an extra projection layer, and a fully connected layer. The backbone network F_θ1 maps samples from the input domain X to a feature space. In order to construct an embedding network that utilizes a high-dimensional distributed representation, the backbone is joined with a projection layer G_θ2. Then we have

µ1 = F_θ1(x),  µ2 = G_θ2(µ1),

where µ1 ∈ R^{d_f} is the intermediate feature of input x, d_f is the dimension of the feature space, µ2 ∈ R^d is the output feature computed from the intermediate feature µ1, and θ1, θ2 are the learnable parameters of the backbone network and the projection layer, respectively. Firstly, we jointly train F_θ1 and G_θ2 from scratch by standard supervised classification on the base session data to derive powerful embeddings for the downstream base learner. The empirical risk to minimize can be formulated as

min_{θ1, θ2, W} Σ_{(x,y)∈D^1} L_ce(W^T G_θ2(F_θ1(x)), y),

where L_ce(·,·) is the cross-entropy (CE) loss and W contains the learnable parameters of the fully connected layer. Lastly, we build on top of the meta-learning setup to allocate quasi-orthogonal vectors to the image classes, positioned far away from each other in the hyperdimensional space. We replace the fully connected layer with the EM and build a series of |C^1|-way K-shot tasks, where |C^1| is the number of base classes and K is the number of support samples in each task. In every task, the projection layer produces a support vector for every training input. To represent each class, we average all support vectors belonging to that class, generating a single prototype vector, which is saved in the EM. Specifically, the prototype for class i is

p_i = (1/|S_i|) Σ_{x∈S_i} G_θ2(F_θ1(x)),

where S_i is the set of all samples from class i and |S_i| is their number. Given a query sample q with embedding µ2 = G_θ2(F_θ1(q)), we compute the similarity for class i as

S^R_i = cos(tanh(p_i), tanh(µ2)),

where tanh(·) is the hyperbolic tangent function and cos(·,·) is the cosine similarity. In hyperdimensional memory-augmented neural networks [43], the hyperbolic tangent has demonstrated its usefulness as a nonlinear function that regulates the norms of the activated prototypes and embedding outputs. Additionally, cosine similarity tackles the norm and bias problems commonly encountered in FSCIL by emphasizing the angle between activated prototypes and embedding outputs while disregarding their norms [44]. Given the similarity score S^R_i for every class i, we utilize a soft absolute (softabs) sharpening function to enhance this attention vector, resulting in quasi-orthogonal vectors [43]. The softabs attention is defined as

α_i = ϵ(S^R_i) / Σ_j ϵ(S^R_j),

where ϵ(·) is the sharpening function

ϵ(c) = 1 / (1 + e^{−β(c−0.5)}) + 1 / (1 + e^{−β(−c−0.5)}),

which includes a stiffness parameter β, set to 10 as in [43].
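The prototype, tanh-cosine similarity, and softabs steps above can be sketched as follows. This is a NumPy illustration under our reading of [43] (the exact sharpening function is an assumption; function names are ours):

```python
import numpy as np

def prototypes(features, labels, num_classes):
    """Average the support embeddings of each class into one prototype vector."""
    return np.stack([features[labels == i].mean(axis=0) for i in range(num_classes)])

def rhd_similarity(query_emb, protos):
    """Cosine similarity between tanh-squashed query embedding and tanh-squashed prototypes."""
    q = np.tanh(query_emb)
    p = np.tanh(protos)
    return (p @ q) / (np.linalg.norm(p, axis=1) * np.linalg.norm(q) + 1e-12)

def softabs(scores, beta=10.0):
    """Soft absolute sharpening with stiffness beta (assumed form, cf. [43]), normalized to an attention vector."""
    eps = 1.0 / (1.0 + np.exp(-beta * (scores - 0.5))) \
        + 1.0 / (1.0 + np.exp(-beta * (-scores - 0.5)))
    return eps / eps.sum()
```

For example, with random 8-dimensional embeddings for four classes, `prototypes` returns a (4, 8) array, `rhd_similarity` returns four scores in [-1, 1], and `softabs` sharpens them into a distribution summing to one.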

Transferable Knowledge Network (TKN)
It is difficult to ensure quasi-orthogonality among all class prototypes due to the presence of novel classes that have only a small number of labeled samples. Inspired by transfer-learning based few-shot methods, we explore various transferable models. The most straightforward approach is to use a model pre-trained from scratch with standard supervised classification; we employ this model as a baseline for our analysis.
SimpleShot [45] demonstrates that nearest-neighbor classification, where features are simply normalized by their L2 norm and compared by Euclidean distance, obtains competitive results on few-shot classification tasks. The squared Euclidean distance after L2 normalization is equivalent to cosine similarity. Utilizing cosine similarity as a distance metric for quantifying data similarity has two implications: 1) during training, it focuses on the angles between normalized features rather than the absolute distances within the latent feature space, and 2) the normalized weight parameters of the fully connected layer can be interpreted as the centroids or centers of each category [36]. So we combine cosine similarity with the cross-entropy loss to train a more transferable network. To simplify the calculation of cosine similarity in the final fully connected layer, we set the bias to zero. The prediction for class i can then be written as

w_i = cos(µ2, W_i) = (W_i^T µ2) / (∥W_i∥ ∥µ2∥),

where w_i is the cosine similarity between the feature µ2 and the weight parameter W_i of class i. The loss function is given by

L = −(1/T) Σ_{j=1}^{T} log( e^{cos(θ_{y_j})} / Σ_i e^{cos(θ_i)} ),

where T is the number of training images and cos(θ_{y_j}) is the cosine similarity of image j towards its ground-truth class.
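A minimal NumPy sketch of the bias-free cosine classifier and its cross-entropy loss described above (our own illustrative implementation for a single sample, not the authors' code):

```python
import numpy as np

def cosine_logits(mu2, W):
    """w_i = cos(mu2, W_i): bias-free cosine similarity between a feature and each class weight row."""
    mu = mu2 / (np.linalg.norm(mu2) + 1e-12)
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    return Wn @ mu

def cosine_ce_loss(mu2, W, y):
    """Cross-entropy over the softmax of cosine similarities, for one labeled sample."""
    logits = cosine_logits(mu2, W)
    logits = logits - logits.max()  # numerical stability before exponentiating
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[y] + 1e-12)
```

Because every logit lies in [-1, 1], the softmax is never saturated, which is one reason the cosine classifier behaves differently from an unnormalized linear head.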

Incremental Test
By employing the incremental-frozen framework, we reduce the storage requirements by only preserving the prototypes of all encountered classes and updating the explicit memory (EM) when introducing new classes. This way, we can effectively manage the limitations imposed by memory and computational capacity. Firstly, we utilize the robust hyperdimensional network and the transferable knowledge network to calculate the prototypes P^R and P^T. Once we acquire the prototypes for the novel classes, we promptly update the EM. It is important to note that the EM does not update the prototypes of the old classes, as RHD and TKN remain fixed in subsequent sessions. Then, we save within the EM all the prototypes for the classes that have appeared so far. Finally, we derive the ultimate classification outcome by evaluating the similarity between the test sample and each prototype. Suppose we have a test sample q. According to Eq. 4, we compute separate similarities S^R and S^T for the classifiers RHD and TKN individually. Then, we combine the classifiers through a weighted integration of both scores to obtain the final score

S = (1 − λ) S^R + λ S^T,

where λ ∈ [0, 1] is a hyperparameter. This approach allows us to leverage the strengths of multiple classifiers and intelligently merge their outputs, leading to a more accurate final result.
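Putting the pieces together, inference over all seen classes reduces to a weighted nearest-prototype rule. A small sketch (assuming, consistent with the ablation where λ = 0 disables the TKN, that λ weights the TKN score; the function name is ours):

```python
import numpy as np

def ensemble_predict(s_rhd, s_tkn, lam=0.8):
    """Combine per-class similarity scores S = (1 - lam) * S^R + lam * S^T
    and return the predicted class plus the fused scores."""
    s = (1.0 - lam) * np.asarray(s_rhd, dtype=float) \
        + lam * np.asarray(s_tkn, dtype=float)
    return int(np.argmax(s)), s
```

For instance, if the RHD scores favor class 0 and the TKN scores favor class 1, λ = 0.8 lets the TKN dominate, while λ = 0.0 falls back to the pure RHD prediction.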
Table 1: Quantitative comparison on the test set of miniImageNet in the 5-way 5-shot FSCIL setting. "Average Acc." is the average performance over all sessions. "Final Improv." calculates the improvement of our method in the last session. † The results of [1] are obtained using its released code.

Experiments
In this section, we conduct quantitative comparisons between our TLCE and state-of-the-art few-shot class-incremental learning methods on two representative datasets. We also perform ablation studies evaluating design choices and different hyperparameters for our method.

Datasets
We evaluate our proposed method on two datasets for benchmarking few-shot class-incremental learning: miniImageNet [7] and CIFAR100 [8].
In the miniImageNet [7] dataset, there are 100 classes, with each class having 500 training images and 100 testing images. As for CIFAR100 [8], it is a challenging dataset with 60,000 images of size 32 × 32, divided into 100 classes. Each class has 500 training images and 100 testing images. Following the split used in [32], we select 60 base classes and 40 novel classes from both CIFAR100 and miniImageNet. The 40 novel classes are further divided into eight incremental sessions. In each session, we learn using a 5-way 5-shot approach, i.e., training on 5 classes with 5 images per class.

Implementation Details
For miniImageNet and CIFAR100, we use ResNet-12 following C-FSCIL [1]. We train the TKN with the SGD optimizer, where the learning rate is 0.01, the batch size is 128, and the number of epochs is 120. The RHD network is pretrained as in the C-FSCIL [1] work. Each image in the dataset is represented by a 512-dimensional feature vector. The hyperparameter λ is set to 0.8 for both the miniImageNet and CIFAR100 datasets.

Comparison and Evaluation
In order to evaluate the effectiveness of our TLCE, we first conduct quantitative comparisons with several representative methods. Quantitative comparisons. As numerous efforts have been devoted to few-shot class-incremental learning, we mainly compare our TLCE with representative and SOTA works. The compared methods include CIL methods [27,28,46] and FSCIL methods [32,33,2,4,3,1]. For C-FSCIL [1], we only compare with their basic version and do not consider their model that requires additional training during incremental sessions.
For our method, we report our best results with the value of λ set to 0.8. Tables 1 and 2 show the quantitative comparison results on the two datasets. It can be seen that our best results outperform the other methods. In particular, we consider different transferable knowledge models. For the baseline, we train the model by standard supervised classification. For TLCE, we integrate the cosine metric with cross entropy to train the model. It can be seen that the latter significantly enhances the performance of the ensemble classifiers.
From Tables 1 and 2, it can be observed that C-FSCIL performs effectively in the first five incremental sessions, while its effectiveness diminishes in the last four. We further analyze the accuracy on base and novel classes, respectively. According to the data shown in Figure 2, we observe a slight decrease in the base performance, which indicates that C-FSCIL can resist knowledge forgetting. However, its performance on novel classes in the following incremental sessions is poor. In contrast, an ideal FSCIL classifier would perform equally well on both novel and base classes. For our method TLCE, it is evident that while there is a decrease on the base classes, there is a significant improvement in the novel and weighted performance. In the ablation study, we perform more experiments and analyze different λ values to reveal which balance of RHD and TKN is more suitable for each dataset.

Ablation Study
In this section, we perform ablation studies to verify the design choices of our method and the effectiveness of its different modules. First, we conduct experiments on the hyperparameter λ to see how the RHD and TKN affect the final results. Then, we study the effectiveness of different ensemble classifiers.
Effect of different hyperparameter λ values. Different λ values correspond to different weights of RHD and TKN applied to the input data. From the results in Table 3, it can be found that the result is lower when the TKN does not contribute (λ = 0.0). With the ensemble of the TKN, the results form a convex curve over different λ values, which indicates the importance of the TKN.
Effect of different ensemble classifiers. We conduct experiments on miniImageNet to verify the effectiveness of the ensemble classifiers. Specifically, we train the TKN as a standard supervised classifier as the baseline. The results in Table 4 show that the ensemble classifier leads to better performance. Furthermore, we find that integrating the cosine metric with cross entropy further enhances model performance. Hence, we adopt the latter approach for TKN training in our classification.

Conclusion
In this paper, we propose a simple yet effective framework, named TLCE, for few-shot class-incremental learning. Without any retraining or expensive computation during incremental sessions, our transfer-learning based ensemble of classifiers efficiently alleviates the issues of catastrophic forgetting and overfitting. Extensive experiments show that our method outperforms SOTA methods. Investigating a more transferable network is worth exploring in the future. Also, exploring a more general way to combine the classifiers is interesting future work.

Figure 1 :
Figure 1: An illustration of the proposed method pipeline. F is the backbone network and G is a projection layer. The RHD and TKN have a shared architecture. We obtain different network parameters by using various training methods and loss functions. In the incremental sessions, we freeze the RHD and TKN parameters.

Table 2 :
Quantitative comparison on the test set of CIFAR100 in the 5-way 5-shot FSCIL setting. "Average Acc." is the average performance over all sessions. "Final Improv." calculates the improvement of our method in the last session. † The results of [1] are obtained using its released code.

Table 3 :
The ablation study on the value of hyperparameter λ. Accuracy (%) on the test sets of miniImageNet and CIFAR100 in the last session is measured.