Merging of Neural Networks

We propose a simple scheme for merging two neural networks trained with different starting initializations into a single network with the same size as the original ones. We do this by carefully selecting channels from each input network. Our procedure might be used as a finalization step after one tries multiple starting seeds to avoid an unlucky one. We also show that training two networks and merging them leads to better performance than training a single network for an extended period of time.


Introduction
Typical neural network training starts with random initialization and proceeds until convergence to some local optimum. The final result is quite sensitive to the starting random seed, as reported in [1,2], which observed a 0.5% accuracy difference between the worst and best seeds on the Imagenet dataset and a 1.8% difference on the CIFAR-10 dataset. Thus, one might need to run an experiment several times to avoid hitting an unlucky seed. The final selected network is simply the one with the best validation accuracy.
We believe that the performance discrepancy between starting seeds can be explained by each initialization selecting slightly different features in the hidden layers. One might ask: can we somehow select better features during network training? One approach is to train a bigger network and then select the most important channels via channel pruning [3,4,5,6]. Training a big network that is subsequently pruned may in many cases be prohibitive, since increasing network width by a factor of two results in a fourfold increase in FLOPs and may also require changes to some hyperparameters (e.g. regularization, learning rate).
Here, we propose an alternative approach, demonstrated in Fig. 1. Instead of training a bigger network and pruning it, we train two networks of the same size and merge them together into one. The idea is that each training run falls into a different local optimum and thus has a different set of filters in each layer, as shown in Fig. 2. We can then select a better set of filters than in either original network and achieve better accuracy. (Availability: https://github.com/fmfi-compbio/neural-network-merging) In summary, in this paper: • We propose a procedure for merging two networks with the same architecture into one with the same architecture as the originals.
• We demonstrate that our procedure produces a network with better performance than the better of the original ones. On top of that, we also show that the resulting network is better than the same network trained for an extended number of epochs (matching the whole training budget of the merged network).

Related work
There are multiple approaches that try to improve the accuracy/size tradeoff of neural networks without the need for specialized sparse computation (as in the case of weight pruning). The most notable one is channel pruning [3,4,5,6].
Here, one first trains a bigger network and then selects the most important channels in each layer. The selection process usually involves assigning a score to each channel and then removing the channels with the lowest scores.
Another approach is knowledge distillation [7]. This involves first training a bigger network (the teacher) and then using its outputs as targets for a smaller network (the student). It is hypothesized that by using the larger network's outputs, the smaller network can also learn hidden aspects of the data which are not visible in the dataset labels. However, it was shown that successful knowledge distillation requires training for a huge number of epochs (e.g. 1200) [8]. A slight twist on distillation was applied in [9], where the bigger and smaller networks were co-trained together.
One can also use auxiliary losses to reduce redundancy and increase diversity between various places in the neural network [10].

Methods
Here, we describe our training and merging procedure. We denote the two networks to be merged as teachers and the resulting network as the student.
Our training strategy is composed of three stages:
1. Training of two teachers
2. Merging procedure, i.e. creating a student, which consists of the following substeps:
   (a) Layerwise concatenation of teachers into a big student
   (b) Learning importance of big student neurons
   (c) Compression of the big student
3. Fine-tuning of the student
Training of the teachers and fine-tuning of the student is just standard training of a neural network by backpropagation. Below, we describe how we derive a student from two teachers.

Layerwise concatenation of teachers into a big student
First, we create a "big" student by layerwise concatenation of the teachers. The big student simulates the two teachers and averages their predictions in the final layer. This phase is just a network transformation without any training, see Fig. 3. Concatenation of a convolutional layer is done in the channel dimension, see Fig. 4. Concatenation of a linear layer is done analogously in the feature dimension. We call the model a "big student" because it has double the width.
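The analogous merge for a linear layer can be sketched in a few lines. The following is a minimal numpy version (the PyTorch version for convolutions in Fig. 4 follows the same block-diagonal pattern); the name `merge_linear` is ours for illustration:

```python
import numpy as np

def merge_linear(W1, b1, W2, b2):
    """Concatenate two linear layers (W of shape out x in) into one
    block-diagonal layer of doubled width; off-diagonal blocks start
    at zero, so the big student initially simulates two independent
    computational flows."""
    out_f, in_f = W1.shape
    W = np.zeros((2 * out_f, 2 * in_f))
    W[:out_f, :in_f] = W1          # teacher 1 in the top-left block
    W[out_f:, in_f:] = W2          # teacher 2 in the bottom-right block
    b = np.concatenate([b1, b2])
    return W, b
```

Feeding the concatenated features [f1; f2] through the merged layer reproduces both teachers' outputs side by side; during training the zero blocks can become non-zero, interconnecting the two flows.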

Learning importance of big student neurons
We want the big student to learn to use only half of the neurons in every layer, so that after the removal of unimportant neurons we end up with the original architecture. Besides learning the relevance of neurons, we also want the two computational flows to interconnect.
There are multiple ways to find the most relevant channels. One can assign scores to individual channels [3,4], or one can use an auxiliary loss to guide the network to select the most relevant channels. We have chosen the latter approach, inspired by [11]. It leverages the L0 loss presented in [12].
Consider a linear layer with k input features. Let g_i be the gate assigned to feature f_i. A gate can be either open, g_i = 1 (the student uses the feature), or closed, g_i = 0 (the student does not use the feature). Before computing the outputs of the layer, we first multiply the inputs by the gates, i.e. instead of computing W f + b, we compute W (f ∘ g) + b. To make our model use only half of the features, we want Σ_i g_i = k/2. The problem with this approach is that g_i is discrete and is not trainable by gradient descent. To overcome this issue, we use the stochastic gates and continuous relaxation of the L0 norm presented in [12]. A stochastic gate contains a random variable that has non-zero probability of being 0, P[g_i = 0] > 0, non-zero probability of being 1, P[g_i = 1] > 0, and is continuous on the interval (0, 1). The reparameterization trick makes the distribution of the gates trainable by gradient descent.
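Such a stochastic gate can be sketched as follows, using the hard-concrete construction of [12]; the constants (beta, gamma, zeta) are the usual defaults from that paper, not values taken from our implementation:

```python
import numpy as np

def sample_gate(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, rng=None):
    """Sample a hard-concrete gate: continuous on (0, 1), yet with
    non-zero probability of being exactly 0 or exactly 1 thanks to
    stretching the concrete sample to (gamma, zeta) and clamping."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    # reparameterized concrete sample: sigmoid((logit(u) + log_alpha) / beta)
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    # stretch to (gamma, zeta), then clamp back to [0, 1]
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)
```

Since the sample is a deterministic, differentiable function of `log_alpha` given the noise u, gradients can flow into the gate parameters (the reparameterization trick).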
To encourage the big student to use only half of the features of a layer, we use an auxiliary loss L_half that penalizes any deviation of the expected number of open gates from k/2. Note that our loss is different from the loss used in [12]: whereas our loss forces the model to have exactly half of the gates open, their loss pushes the model to use as few gates as possible.
Thus we optimize L = L_E + λ L_half, where L_E is the error loss measuring fit on the dataset and the new hyperparameter λ sets the relative importance of the error loss and the auxiliary loss. The hyperparameter λ is sensitive and needs proper tuning. At the beginning of training, it cannot be too big, or the student will set every gate to be closed with probability 0.5. At the end of training, it cannot be too small, or the student will ignore the auxiliary loss in favor of the error loss; it will then use more than half of the neurons of a layer and performance will drop significantly after compression. We found that a quadratic increase of λ during the big student's training works sufficiently well, see Fig. 5.
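The objective can be sketched as follows; the exact functional form of L_half and the schedule constants are illustrative assumptions rather than the precise values from our code:

```python
import numpy as np

def l_half(gate_probs):
    """Illustrative auxiliary loss: squared deviation of the expected
    number of open gates from half of the layer width k/2."""
    k = len(gate_probs)
    return (np.sum(gate_probs) - k / 2) ** 2

def lambda_schedule(step, total_steps, lambda_max=1.0):
    """Quadratic increase of lambda from 0 to lambda_max over training."""
    return lambda_max * (step / total_steps) ** 2
```

The total loss at a given step would then be `L_E + lambda_schedule(step, total_steps) * sum(l_half(p) for p in per_layer_gate_probs)`.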
We have implemented the gates as a separate layer. We used two designs of gate layers, one for 2d channels and one for 1d data. The position of the gate layers is critical. For example, if a gate layer were positioned right before a batch norm, its effect (i.e. multiplying the channel by 0.1) would be countered by the batch norm, see Fig. 6. Two of the ResNet gate layers have to be identical: if the layers were not linked and, for some channel i, the first gate were closed while the second gate were open, the result of the second block for that channel would be 0 + f(x) instead of x_i + f(x), which would defeat the whole purpose of ResNet and skip connections.

Compression of big student
After the learning of importance is finished, we select the most important half of the neurons in every layer. Then, we compress each layer by keeping only the selected neurons, as visualized in Fig. 7.
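The compression step for a linear layer can be sketched as follows (a numpy illustration; `compress_linear` and its arguments are our names, not the actual implementation):

```python
import numpy as np

def compress_linear(W, b, out_scores, in_keep):
    """Keep the top half of output neurons by gate score, and drop the
    input features already removed by the previous layer's compression.
    W has shape (out, in); in_keep lists surviving input indices."""
    k = len(out_scores) // 2
    out_keep = np.sort(np.argsort(out_scores)[-k:])  # indices of top-k neurons
    return W[np.ix_(out_keep, in_keep)], b[out_keep], out_keep
```

Applying this layer by layer halves every width, returning the big student to the original teacher architecture; the surviving output indices of one layer become the `in_keep` of the next.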

Experimental results
We compare our merging strategy to generic neural network training on several problems. First, we test our training strategy on a synthetic problem and show that it can learn better features than typical training. Then we test various architectures on image classification problems. We show that after the merging procedure, the resulting network is better than the original ones and also better than one network trained for an extended amount of time.

Training strategies
We compare our network merging with generic training strategies which use the same total number of training epochs. Except for Imagenet, we compare our training strategy with the strategies bo3 model and one model.
In our merging strategy, student, we use two-thirds of the epochs to train the teachers and one-third to train the student (one-sixth to find important neurons and one-sixth to fine-tune).
In the bo3 model strategy, we train three models, each for one-third of the epochs, and then choose the best one.
In the final strategy, one model, we use all epochs to train one model. Note that each strategy uses a similar training budget. Also, during inference all models use an equivalent amount of resources, since they have exactly the same architecture.

Synthetic dataset -Sine problem
First, we want to confirm the idea that a network trained from random initialization might end up in a suboptimal local optimum and that our merging procedure finds a higher quality one. To verify this, we created a synthetic dataset of five sine waves with noise. The input is a scalar x and the target is y = sin(10πx) + z where z ∼ N(0, 0.2), see Fig. 8.
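The dataset can be generated in a few lines (a sketch; the sample count of 10000 matches Fig. 8, and we read the 0.2 in N(0, 0.2) as the standard deviation):

```python
import numpy as np

def make_sine_dataset(n=10000, rng=None):
    """x ~ U(0, 1), y = sin(10*pi*x) + z, z ~ N(0, 0.2);
    0.2 interpreted here as the noise standard deviation."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.uniform(0.0, 1.0, size=n)
    y = np.sin(10 * np.pi * x) + rng.normal(0.0, 0.2, size=n)
    return x, y
```

Since sin(10πx) completes five full periods on [0, 1], a network must dedicate hidden units to every peak, which makes missed peaks easy to spot.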
Our architecture is composed of two linear layers (i.e. one hidden layer) with 100 hidden neurons (Fig. 8). In every strategy, we used 900 epochs and SGD with starting learning rate 0.01 and momentum 0.9. We then decreased the learning rate to 0.001 after the 100th epoch for student fine-tuning, the 250th epoch for teachers and bo3 models, and the 800th epoch for the model in one model. We repeated all experiments 50 times. We observe that our strategy has a significantly smaller error than the other strategies (Table 1, Fig. 9).
Digging deeper (Fig. 10), we observe that networks trained by our strategy predict all the peaks correctly, whereas networks trained by a generic strategy often miss some peaks. This indicates that our training strategy helps the network select better features for later use. In some cases, the bo3 network is lucky and predicts all the peaks correctly, which can be seen in its minimal error being similar to that of our training strategy.
In all cases, our training strategy with merging provides better results than the generic training strategies. Results are summarized in Tables 2 and 3.

Imagewoof on LeNet
LeNet is composed of two convolutional layers followed by three linear layers. The shape of an input image is (28, 28, 3). The convolutional layers have 6 and 16 output channels, respectively. The linear layers have 400, 120, and 80 input features, respectively. For the architecture of the big student, see Fig. 6.
Every strategy used 6000 epochs cumulatively and SGD with starting learning rate 0.01 and momentum 0.9. Every training run except finding important neurons (i.e. teachers, student fine-tuning, bo3 models, and one model) decreased the learning rate to 0.001 in the third quarter and to 0.0001 in the last quarter of the training.
We conducted 10 experiments; see Fig. 11 for a visualization and Table 2 for detailed statistics. Our strategy has consistently better results than the other strategies. It has a greater sample variance than one model as a consequence of an outlier, see Fig. 11.

Imagewoof on ResNet18
ResNet has two information flows (one through blocks, one through skip connections). Throughout the computation, its update is x = f(x) + x instead of the original x = f(x). To preserve this property, some gate layers have to be synchronized, i.e. share weights and realizations of the random variables, see Fig. 6.
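The need for synchronized gates can be illustrated with a toy computation (pure numpy; the gate placement is simplified relative to Fig. 6):

```python
import numpy as np

def gated_block(x, g, f):
    """A residual block with a gate on its input: the skip connection
    carries the gated activation, y = g*x + f(g*x)."""
    xg = g * x
    return xg + f(xg)

x = np.array([1.0, 2.0, 3.0])
zero_residual = lambda t: np.zeros_like(t)   # f == 0 isolates the skip path

# Linked gates: channel 2 closed in both blocks; open channels pass through intact.
g = np.array([1.0, 1.0, 0.0])
linked = gated_block(gated_block(x, g, zero_residual), g, zero_residual)

# Unlinked gates: closing channel 0 only in the first block zeroes its skip
# value, so the second block sees 0 there even though its own gate is open.
g1 = np.array([0.0, 1.0, 1.0])
g2 = np.array([1.0, 1.0, 1.0])
unlinked = gated_block(gated_block(x, g1, zero_residual), g2, zero_residual)
```

With linked gates the identity path behaves as expected; with unlinked gates, channel 0's skip contribution becomes 0 + f(x) instead of x_0 + f(x), which is exactly the failure mode described above.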
Every strategy used 600 epochs cumulatively. The optimizer and the learning rate scheduler are analogous to the LeNet experiment.
We conducted 5 experiments; see Fig. 12 for a visualization and Table 2 for detailed statistics. As with LeNet, our strategy has consistently better results than the other strategies.

CIFAR-100 on ResNet20
We also tested our approach on the CIFAR-100 dataset using ResNet20. Our total training budget is 900 epochs. We optimize models using SGD with starting learning rate 0.1, dividing it by 10 at the half and at three quarters of the training of one network. We ran all strategies 5 times and report the results in Table 2. We can see that our strategy is more than 1% better than training one model for an extended period of time.

Imagenet-1k
We also tested our merging approach on the Imagenet-1k dataset [18]. However, as seen in [2], high quality training requires 300 to 600 epochs, which is quite prohibitive. We opted for the approach from Torchvision [19], which achieves decent results in 90 epochs. We train networks using SGD with starting learning rate 0.1, which decreases by a factor of 10 at one third and two thirds of the training. For the final fine-tuning of the student, we used a slightly smaller starting learning rate of 0.07.
For merging, we used a slightly different approach than in the previous experiments. We trained each teacher for only a short period of 20 epochs, which gives a teacher accuracy of around 65%. Then we spent 20 epochs tuning the big student and finding important neurons, and finally fine-tuned for 90 epochs. With a total of 150 epochs, we get better results than ordinary training for 90 epochs and also better results than training for an equivalent 150 epochs. Results are summarized in Table 3.

Conclusions and future work
We proposed a simple scheme for merging two neural networks trained from different initializations into one. Our scheme can be used as a finalization step after one trains multiple copies of a network with varying starting seeds. Alternatively, our scheme can be used to obtain higher quality networks under a similar training budget, as we demonstrated experimentally.
One of the downsides of our scheme is that during the selection of important neurons we need to instantiate a rather big neural network. In the future, we would like to optimize this step to be more resource efficient. One option is to select important neurons in a layerwise fashion.
Other possible options for future research include merging more than two networks and also merging networks pretrained on different datasets.

Acknowledgements
This research was supported by grant 1/0538/22 from Slovak research grant agency VEGA.

Figure 1 :
Figure 1: Comparison between a) training a bigger network and then pruning it and b) training two separate networks and then merging them together. The width of a rectangle denotes the number of channels in the layer.

Figure 2 :
Figure 2: Sets of filters in the first layer of two ResNet20 networks trained on the CIFAR-100 dataset with different starting seeds. Each row shows filters from one network. The filters selected for the merged network are marked with a red outline.


Figure 3 :
Figure 3: Concatenation of a linear layer. Orange and green weights are copies of the teachers' weights. Gray weights are initialized to zero. In the beginning, the big student simulates two separate computational flows, but during training they can become interconnected.
def merge_conv(conv1: nn.Conv2d, conv2: nn.Conv2d) -> nn.Conv2d:
    in_channels = conv1.in_channels
    out_channels = conv1.out_channels
    conv = nn.Conv2d(in_channels * 2, out_channels * 2,
                     kernel_size=conv1.kernel_size,
                     stride=conv1.stride,
                     padding=conv1.padding,
                     bias=False)
    conv.weight.data *= 0
    conv.weight.data[:out_channels, :in_channels] = \
        conv1.weight.data.detach().clone()
    conv.weight.data[out_channels:, in_channels:] = \
        conv2.weight.data.detach().clone()
    return conv

Figure 4 :
Figure 4: PyTorch code for concatenation of a convolutional layer in ResNet. Since the convolutions are followed by batch normalization, they do not use biases.

Figure 6 :
Figure 6: Positions of gate layers in a) the sine problem, b) LeNet, c) two consecutive blocks in ResNet. Two of the ResNet gate layers have to be identical: if the layers were not linked and, for some channel i, the first gate were closed while the second gate were open, the result of the second block for that channel would be 0 + f(x) instead of x_i + f(x), which would defeat the whole purpose of ResNet and skip connections.

Figure 7 :
Figure 7: Compression of the big student. On the left is a linear layer of the big student; the prior and following gate layers decide which neurons are important. On the right is the compressed layer, consisting of only the important neurons.

Figure 8 :
Figure 8: a) Training dataset for the sine problem, consisting of 10000 samples where x ∼ U(0, 1) and y = sin(10πx) + z; z ∼ N(0, 0.2). b) Architecture of the model for the sine problem; details about the training setup are provided in the text.

Figure 9 :

Figure 10 :
Figure 10: Plots of sine curves learned by models trained with different strategies. We plot every training result as one line and overlay them on top of each other. As we can see, all of the students resulting from merging capture all of the peaks, whereas models without merging often miss some peaks.

Figure 11 :
Figure 11: Box plot of the testing accuracies of 10 experiments on Imagewoof with LeNet.

Figure 12 :
Figure 12: Results of 5 experiments on Imagewoof with ResNet18.The worst student (0.819) had slightly better accuracy than the best long teacher (0.818).

Table 1 :
Summary of experimental results for the sine problem. We report the squared loss for each strategy. The student strategy (ours) uses 2/3 of the epochs to train two teachers and 1/3 to train the student (1/6 finding important features, 1/6 fine-tuning). The bo3 model strategy trains three models and picks the best. The one model strategy uses all epochs to train one model.