1 Introduction

Convolutional networks have become a dominant approach for visual object recognition [16, 30, 50, 52]. However, as convolutional neural networks (CNNs) become increasingly deep, the vanishing gradient problem [16] poses a significant challenge: input information can vanish as it passes through many layers before reaching the end of the network.

When training a deep neural network, gradients can become very small during backpropagation, making it hard to optimise the parameters in the early layers of the network. As a result, during training the weights of the layers near the end of the network are updated rapidly while the early layers are not, leading to poor results. The ReLU activation function and regularisation methods such as dropout were proposed to address this problem [11]. However, while these methods are important, they do not solve the problem entirely. Huang et al. [19] found that as layers are added to a network, at some point its performance starts to decrease. Recent work [16, 18, 53, 54] proposed different solutions, such as skip connections [16], the use of different sized filters in parallel [53, 54] and exhaustive concatenation between layers [18]. These go some way towards addressing the problem.

In this paper, we draw inspiration from the above networks [16,17,18, 53,54,55] and propose a novel network architecture that retains positive aspects of these approaches [16, 18] while overcoming some of their limitations. Figure 1 illustrates a single module of our proposed architecture and its unique connectivity. We show that the ChoiceNet design allows good gradient and information flow through the network while using fewer parameters than other state-of-the-art schemes. We evaluate ChoiceNet on benchmark datasets (ImageNet [28], CIFAR10 [27], CIFAR100 [27] and SVHN [40]) for image classification, on 300W [47] for facial landmark localisation and on the CamVid dataset [22] for semantic segmentation. Our model performs well against existing networks [14, 16, 18, 53,54,55] on all these datasets, showing promising results when compared to the current state-of-the-art (Fig. 2).

Fig. 1

A single module of ChoiceNet. The ‘+’ denotes skip connections and ‘C’ denotes channelwise concatenation

Fig. 2

A breakdown of the ChoiceNet module of Fig. 1. Here, letters A to G denote unique information generated by one forward pass through the module

Fig. 3

A single block of ChoiceNet containing three consecutive ChoiceNet modules (see Fig. 1). They are simply stacked one after another and densely connected like DenseNet [18]

2 Related works

Since the introduction of convolutional networks, finding the ideal network architecture for a particular task has been a challenging area of research. The increasing number of layers in modern architectures amplifies the differences between connectivity patterns and has motivated revisiting older ideas.

ResNet ResNet [16] uses identity mappings as bypassing paths to improve over typical CNNs [27].

A typical convolutional feed-forward network connects the \(l\mathrm{th}\) layer’s output to the \((l+1)\mathrm{th}\) layer’s input, giving rise to the layer transition \(x_l = H_l(x_{l-1})\). ResNet [16] adds an identity-mapped connection, also referred to as a skip connection, that bypasses the transformation in between:

$$\begin{aligned} x_l = H_l(x_{l-1}) + x_{l-1} \end{aligned}$$
(1)

This mechanism allows gradients to flow directly through the identity function, which results in faster training and better error propagation. However, it was argued in [18] that despite the benefits of skip connections, the summation may disrupt the information flow of the network and thereby degrade its performance.
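
The following is a minimal PyTorch sketch of the residual connectivity in Eq. (1); the layer composition and channel sizes inside \(H_l\) are illustrative assumptions, not the exact ResNet configuration.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Sketch of Eq. (1): x_l = H_l(x_{l-1}) + x_{l-1}."""
    def __init__(self, channels):
        super().__init__()
        # H_l: the nonlinear transformation being bypassed (illustrative composition)
        self.h = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # identity (skip) connection: output = H_l(x) + x
        return self.h(x) + x
```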

In [56], a wider version of ResNet was proposed, where the authors showed that an increased number of filters in each layer can improve overall performance given sufficient depth. FractalNet [29] also shows comparable improvements on similar benchmark datasets [27].

DenseNet As an alternative to ResNet, DenseNet [18] proposed a different connectivity scheme: each layer is connected to all of its subsequent layers, so the \(l\)th layer receives the feature maps of all previous layers. Considering \(x_0, x_1, \ldots , x_{l-1}\) as input:

$$\begin{aligned} x_l = H_l([x_0,x_1 \ldots , x_{l-1}]) \end{aligned}$$
(2)

where \([x_0, x_1, \ldots , x_{l-1}]\) denotes the concatenation of the feature maps produced by layers 0 to \(l-1\).

The network maximises information flow by concatenating the outputs of convolutional layers channelwise instead of summing them through skip connections. In this model, layer \(l\) has \(l\) inputs, consisting of the feature maps of all preceding layers, so an \(L\)-layer network has \(L(L+1)/2\) connections in total. DenseNet requires fewer parameters as there is no need to relearn redundant feature maps, which allows the network to compete with ResNet using fewer parameters.
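
Below is a minimal PyTorch sketch of the dense connectivity in Eq. (2); the growth rate and the composition of \(H_l\) are illustrative, not the exact DenseNet-BC configuration.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        # pre-activation composite (illustrative)
        self.h = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, features):
        # features is the list [x_0, x_1, ..., x_{l-1}]; Eq. (2) concatenates them
        return self.h(torch.cat(features, dim=1))

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(features))   # x_l = H_l([x_0, ..., x_{l-1}])
        return torch.cat(features, dim=1)
```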

In the next section, we propose an alternative connectivity that retains the advantages of the above architectures while reducing some of their limitations.

3 ChoiceNet

Consider a single image \(x_0\) passing through a CNN. The network has L layers, each implementing a nonlinear transformation \(H_l(\cdot)\), where l is the index of the layer. \(H_l(\cdot)\) is a composite of operations such as batch normalisation [20], pooling [31], rectified linear units (ReLU) [39] or convolution. The output of the \(l\)th layer is denoted \(x_l\).

ChoiceNet We propose an alternative connectivity that retains the advantages of the above architectures while reducing some of their limitations. Figure 1 illustrates the connectivity layout between each layer of a single module. Each block of ChoiceNet contains three modules and the total network is comprised of three blocks with pooling operations in the middle (see Fig. 3).

Figure 2 shows a breakdown of each module. Letters A to G denote unique information generated by one forward pass through the module. B is generated by three consecutive \(3 \times 3\) convolutional operations, whereas A is the result of the same three convolutional operations additionally connected by a skip connection. Following this pattern, we generate the information represented by letters C, D, F and G. Letter E denotes the special case where no convolutional operation is applied after the \(1 \times 1\) convolution, so it contains all the original information. This information is then concatenated with the others (i.e. A, B, etc.) at the final output.

Therefore, the final output contains information, with and without skip connections, from filters of size 3, 5 and 7, as well as the original input without any modification. Note that the \(1 \times 1\) convolution at the start acts as a bottleneck to limit computational cost, and all convolutional operations are padded appropriately for the concatenation at the final stage. Kernel sizes of 3, 5 and 7 were chosen because these three sizes together give the best performance [54, 55]; adding more kernel sizes, such as the combinations 3, 5, 9 and 11 or 3, 7 and 11, increases the number of parameters without much improvement in performance (Fig. 4).
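
A simplified PyTorch sketch of one module, as read from Figs. 1 and 2, is given below. The channel width and the ordering of operations inside the composite function are illustrative assumptions rather than the precise configuration used in our experiments.

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, k):
    """Composite function: convolution, batch normalisation, ReLU (see Sect. 3)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ChoiceModule(nn.Module):
    def __init__(self, in_channels, width=32):
        super().__init__()
        self.bottleneck = conv_bn_relu(in_channels, width, 1)   # 1x1 bottleneck
        # three consecutive convolutions for each kernel size 3, 5 and 7
        self.branches = nn.ModuleList(
            nn.Sequential(*[conv_bn_relu(width, width, k) for _ in range(3)])
            for k in (3, 5, 7)
        )

    def forward(self, x):
        e = self.bottleneck(x)                 # E: the unmodified (bottlenecked) input
        outputs = [e]
        for branch in self.branches:
            b = branch(e)                      # output without skip connection (B, D, G)
            a = b + e                          # output with skip connection (A, C, F)
            outputs.extend([a, b])
        return torch.cat(outputs, dim=1)       # channelwise concatenation at 'C'
```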

Considering \(x_0, x_1 \ldots , x_{l-1}\) as input, our proposed connectivity is given by:

$$\begin{aligned}&x_l = H_l(x_{l-1}) + x_{l-1} \end{aligned}$$
(3)
$$\begin{aligned}&x_{l+1} = H_l([x_{l},x_{l-1}] ) + x_{l} \end{aligned}$$
(4)

where \([x_l, x_{l-1}]\) is the concatenation of feature maps. The feature maps are first summed and then concatenated, resembling the characteristics of ResNet and DenseNet, respectively.
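
A literal reading of Eqs. (3) and (4) can be sketched as follows; it assumes that the composite functions (placeholders `h_l` and `h_next`) produce feature maps whose spatial size and channel count allow the sum and concatenation shown.

```python
import torch

def choicenet_transition(h_l, h_next, x_prev):
    x_l = h_l(x_prev) + x_prev                              # Eq. (3): ResNet-style summation
    x_next = h_next(torch.cat([x_l, x_prev], dim=1)) + x_l  # Eq. (4): DenseNet-style concatenation, then sum
    return x_next
```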

Composite function Each composite function consists of a convolution operation, followed by batch normalisation, and ends with a rectified linear unit (ReLU).

Pooling Pooling is an essential part of convolutional networks since Eqs. (1) and (2) are not viable when the feature maps are not of equal size. We divide the network into multiple blocks, where each block contains features of the same size. Instead of using either max pooling or average pooling, we use both pooling mechanisms and concatenate their outputs before feeding the result to the next layer (see Fig. 5).
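
The dual-pooling transition can be sketched as below; the kernel size and stride are the usual 2/2 assumption for halving the spatial resolution.

```python
import torch
import torch.nn as nn

class DualPool(nn.Module):
    """Max pooling and average pooling in parallel, concatenated channelwise."""
    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.max_pool = nn.MaxPool2d(kernel_size, stride)
        self.avg_pool = nn.AvgPool2d(kernel_size, stride)

    def forward(self, x):
        # doubles the channel count while halving the spatial resolution
        return torch.cat([self.max_pool(x), self.avg_pool(x)], dim=1)
```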

Bottleneck layers The use of \(1\times 1\) convolutional operations (known as bottleneck layers) can reduce computational complexity without hurting the overall performance of a network [34]. We introduce a \(1\times 1\) convolutional operation at the start of each composite function (see Figs. 1 and 3).

Fig. 4

The ChoiceNet consists of three ChoiceNet blocks where each block contains three ChoiceNet modules (see Fig. 3) and each ChoiceNet module is connected via feature maps and skip connections (see Fig. 1). After each block, there is a Max-pool and an Avg-Pool operation and their feature maps are concatenated for the next layer

Implementation details ChoiceNet has three blocks, each containing an equal number of modules. In each Choice operation (see Fig. 1), there are three \(3\times 3\), three \(5\times 5\) and three \(7\times 7\) convolutional operations. Each set of consecutive convolutional operations is connected via a skip connection (red line in Fig. 1). The feature maps are then concatenated so that the outputs both with and without the skip connections are included (green and black lines in Fig. 1 before ‘C’). Finally, the original input feature map is also merged (blue line in Fig. 1) to produce the final output.

The idea behind merging the skip (letter A, Fig. 2) and non-skip (letter B, Fig. 2) outputs is to enable the network to choose between the two options for each filter size. We also merge the original input into this output (letter E, Fig. 2) so that the network can choose a suitable depth for optimal performance. To give the network further options, we use both max and average pooling: each pooling layer contains both a Max-Pool and an Avg-Pool operation, and their outputs are merged before proceeding to the next layer.
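
Putting the pieces together, a block can be sketched as three densely connected modules, with a dual-pooling transition placed between blocks (Figs. 3 and 4). The sketch below reuses the illustrative `ChoiceModule` class above; the exact channel bookkeeping is an assumption.

```python
import torch
import torch.nn as nn

class ChoiceBlock(nn.Module):
    """Three ChoiceNet modules, densely connected as in Fig. 3 (illustrative wiring)."""
    def __init__(self, in_channels, width=32, num_modules=3):
        super().__init__()
        self.choice_modules = nn.ModuleList()
        channels = in_channels
        for _ in range(num_modules):
            self.choice_modules.append(ChoiceModule(channels, width))
            channels += 7 * width          # each module emits A..G, i.e. 7 * width channels
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for module in self.choice_modules:
            # each module sees the concatenation of the block input and all earlier module outputs
            features.append(module(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# between blocks, a DualPool (sketched earlier) halves the resolution and merges both pooling outputs
```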

4 Experiments

We evaluate our proposed ChoiceNet architecture on benchmark classification datasets (ImageNet [28], CIFAR10 [27], CIFAR100 [27] and SVHN [40]) and compare it with other state-of-the-art architectures. We also evaluate it on the CamVid dataset [22] for semantic segmentation and on the 300W dataset [47] for facial landmark localisation.

4.1 Datasets

4.1.1 CIFAR

The CIFAR dataset [27] is a collection of two datasets, CIFAR10 and CIFAR100. Each consists of 50,000 training images and 10,000 test images of \(32\times 32\) pixels. CIFAR10 contains 10 classes and CIFAR100 contains 100. In our experiments, we hold out 5000 images from the training set for validation and use the remaining images for training. We choose the model with the highest accuracy on the validation set to evaluate on the test set. During training we adopt standard data augmentation, including horizontal flipping, random cropping, shifting and normalisation using the channel means and standard deviations. These augmentations were widely used in previous work [16, 19, 29, 32, 34, 43, 51, 52]. We also test our model on the datasets without augmentation. In Table 1, we denote the original datasets as C10 and C100, and the augmented datasets as C10+ and C100+.
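
One common recipe for this augmentation is sketched below; the 4-pixel pad-and-crop and the per-channel statistics are widely used CIFAR10 values and are assumptions here, not parameters taken from the cited works.

```python
import torchvision.transforms as T

cifar_mean = (0.4914, 0.4822, 0.4465)   # commonly used CIFAR10 channel means (assumed)
cifar_std = (0.2470, 0.2435, 0.2616)    # commonly used CIFAR10 channel standard deviations (assumed)

train_transform = T.Compose([
    T.RandomHorizontalFlip(),            # horizontal flipping
    T.RandomCrop(32, padding=4),         # random cropping after a 4-pixel shift/pad
    T.ToTensor(),
    T.Normalize(cifar_mean, cifar_std),  # normalise with channel mean and std
])

test_transform = T.Compose([
    T.ToTensor(),
    T.Normalize(cifar_mean, cifar_std),
])
```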

4.1.2 SVHN

The SVHN dataset contains images of street view house numbers of \(32\times 32\) pixels. There are 73,257 images in the training set and 26,032 in the test set, with an additional 531,131 images available for training. As in previous work [16, 19, 29, 32, 43], we use all the training data with no augmentation and hold out 10% of the training images as a validation set. We select the model with the highest accuracy on the validation set and report its test error in Table 1.

4.1.3 CamVid

The CamVid dataset [12] consists of 12 classes and has mostly been used for semantic segmentation in previous work [2, 10, 38]. It contains a training set of 367 images, a validation set of 100 images and a test set of 233 images. The challenge is to classify the input image pixelwise and correctly identify the objects in the scene. The intersection over union (IoU) metric is commonly used for this task [2, 7, 22].
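
For reference, a generic sketch of the metric is shown below (the mean IoU reported in Table 4 averages the per-class IoU over the 12 classes); this is not the exact evaluation code of the cited works.

```python
import numpy as np

def class_iou(pred, target, class_id):
    """IoU for one class; pred and target are integer arrays of class indices."""
    pred_mask = (pred == class_id)
    target_mask = (target == class_id)
    intersection = np.logical_and(pred_mask, target_mask).sum()
    union = np.logical_or(pred_mask, target_mask).sum()
    return intersection / union if union > 0 else float('nan')

def mean_iou(pred, target, num_classes=12):
    # average the IoU over all classes, ignoring classes absent from both prediction and target
    ious = [class_iou(pred, target, c) for c in range(num_classes)]
    return np.nanmean(ious)
```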

4.1.4 ImageNet

The ILSVRC 2012 classification dataset [46] consists of 1.2 million images for training and 50,000 for validation, spanning 1000 classes. We adopt the same data augmentation scheme for training images as in [18, 19] and apply a single-crop or 10-crop of size \(224 \times 224\) at test time. Following [18], we report classification errors on the validation set.

4.1.5 300W

The 300W [47,48,49] dataset is a collection of multiple face datasets, namely LFPW [3], HELEN [25], AFW [62] and XM2VTS [37]. It is a challenging dataset that has been widely used for benchmarking facial landmark localisation algorithms [41]. Each image contains a face with 68 landmarks [8, 9] that were semi-automatically annotated [49].

4.2 Training

Each experiment was performed five times; during training we took the model with the best validation score and report its performance on the test set.

4.2.1 Classification

All networks were trained using stochastic gradient descent (SGD) [5]. We avoid other optimisers such as Adam [23] and RMSProp [15] to keep the comparisons as fair and simple as possible. On all three datasets, we used a batch size of 128. For the first 100 epochs, we used a learning rate of 0.001, for the next 100 epochs 0.0001, and a rate of 0.00001 for the final 300 epochs.

We used a weight decay of 0.0005 and Nesterov momentum [45] without dampening. We use a dropout layer after each ChoiceNet block with a dropout rate of 0.2.
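
A sketch of this optimiser setup is given below; the momentum value (0.9) is an assumption, and `model`, `train_loader` and `train_one_epoch` are placeholders for the network, data pipeline and inner training loop.

```python
import torch

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-3,            # small initial rate to avoid exploding gradients (see Sect. 5.1)
    momentum=0.9,       # assumed value; Nesterov momentum without dampening
    nesterov=True,
    dampening=0,
    weight_decay=5e-4,
)
# 0.001 for epochs 0-99, 0.0001 for epochs 100-199, 0.00001 for the final 300 epochs
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 200], gamma=0.1)

for epoch in range(500):
    train_one_epoch(model, train_loader, optimizer)
    scheduler.step()
```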

We use the training hyperparameters from [19], which were later adopted by [18], so that the training environment is the same for every network.

Fig. 5

The ChoiceNet consists of three ChoiceNet blocks where each block contains three ChoiceNet modules (see Fig. 3) and each ChoiceNet module is connected via feature maps and skip connections (see Fig. 1). After each block, there is a Max-pool and an Avg-Pool operation and their feature maps are concatenated for the next layer

4.2.2 Segmentation

For this task, we use the training procedure of U-Net [44] (figure in supplementary materials S6) and replace the conv-blocks of U-Net with a Res-Block (the block of each network that carries its unique properties), a Dense-Block or a ChoiceNet module (Fig. 3). We use the Adam optimiser with an initial learning rate of 0.001, reduced by a factor of 10 every 100 epochs until the network converged. A weight decay of 0.0005 and Nesterov [45] momentum without dampening were used. For fair comparison, we kept the number of channels of the Res-Block and Dense-Block unchanged from the original articles [18, 19] (Fig. 6).
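
The optimiser schedule for this task can be sketched as follows; `segmentation_model` is a placeholder for the U-Net variant being trained.

```python
import torch

optimizer = torch.optim.Adam(segmentation_model.parameters(), lr=1e-3, weight_decay=5e-4)
# reduce the learning rate by a factor of 10 every 100 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)
```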

Fig. 6

Training procedure using U-Net [44]. Before each pooling operation, the features are stored and later concatenated when the feature maps are upsampled as indicated by the green arrows

4.2.3 Facial landmark prediction

For evaluation, we followed the protocol used in [13, 42], where the final test set consists of 689 images divided into two categories: common and challenging. The common subset has 554 images and the challenging subset has the rest. We used the L1 loss, as it is more appropriate for this task [13], and we also used the Wing loss [13], a robust loss function designed for facial landmark prediction.
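
A sketch of the Wing loss of [13] is given below; the parameter values (w = 10, \(\epsilon\) = 2) are commonly used defaults and may differ from the exact settings of the original work.

```python
import math
import torch

def wing_loss(pred, target, w=10.0, epsilon=2.0):
    """Wing loss [13]: logarithmic near zero, linear for large errors."""
    diff = (target - pred).abs()
    c = w - w * math.log(1.0 + w / epsilon)   # constant that joins the two pieces at |x| = w
    loss = torch.where(diff < w,
                       w * torch.log(1.0 + diff / epsilon),
                       diff - c)
    return loss.mean()
```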

Table 1 Error rates (100 - accuracy)% on CIFAR and SVHN datasets

5 Result analysis

5.1 CIFAR and SVHN

Accuracy Table 1 shows that ChoiceNet with depth 40 achieves the highest accuracy on all three datasets. The error rates on C10+ and C100+ are 4.0% and 17.5%, respectively, which are lower than the error rates achieved by other state-of-the-art models. Our results on the original C10 and C100 (without augmentation) datasets are 2% lower than Wide ResNet and 5% lower than pre-activated ResNet. ChoiceNet (\(d = 37\)) performs comparably to DenseNet-BC with \(k = 24\) and \(k = 40\), whereas ChoiceNet (\(d=40\)) outperforms all other networks.

Parameter efficiency Table 1 shows that ChoiceNet needs fewer parameters to give similar or better performance compared to other state-of-the-art architectures. For instance, ChoiceNet with a depth of 30 has only 13 million parameters yet performs comparably to DenseNet-BC (\(k=24\)), which has 15.3 million. Our best results were achieved by ChoiceNet (\(d = 40\)) with 23.4 million parameters, compared to DenseNet-BC (\(k = 40\)) with 25.6 million, DenseNet (\(k = 24\)) with 27.2 million and Wide ResNet with 36.5 million parameters.

Over-fitting Deep learning architectures can be prone to over-fitting; however, as ChoiceNet requires fewer parameters, it is less likely to over-fit the training datasets. Its performance on the non-augmented datasets appears to support this claim.

Exploding gradient While training ChoiceNet we observed that it occasionally suffers from an exploding gradient problem. ResNet and DenseNet were both trained using stochastic gradient descent (SGD) with a learning rate of 0.1 that was later reduced to 0.01 and then 0.001 after every 100 epochs. However, we had to start training our network with a learning rate of 0.001, because setting the rate any higher caused the gradients to explode. We also had to reduce the learning rate to 0.0001 and then to 0.00001 after every 50 epochs instead of 100 to prevent the problem from recurring (Table 2).

The problem of exploding gradients is easier to handle than that of vanishing gradients. We used a smaller learning rate at the start together with L2 regularisation and dropout layers (\(p = 0.5\)), which addressed the problem.

Table 2 Top-1 and Top-5 error rates (%) on the ImageNet dataset
Table 3 Top-1 and Top-5 error rates (%) on the ImageNet dataset for ChoiceNet with only max pooling, only average pooling and both together
Table 4 Mean IoU (m_IoU) on the CamVid test set, i.e. the mean of the IoUs of all 12 classes
Table 5 A comparison of errors between different network architectures on the 300W dataset with L1 loss
Table 6 A comparison of errors between different network architectures on the 300W dataset with Wing loss

5.2 CamVid

We tested ChoiceNet on the CamVid dataset and compared it with other state-of-the-art networks [7, 14, 21, 24, 33, 36, 57,58,59]. Mean IoU (m_IoU) scores are shown in Table 4.

Our network performs better than the other architectures: it outperforms DenseNet and ResNet both in terms of m_IoU score and in terms of parameter efficiency. ChoiceNet, with 13 million parameters, performs better than networks almost twice its size.

5.3 300W

ChoiceNet was tested on 300W, a facial landmark localisation dataset where the goal is to predict 68 landmarks on a face. The dataset has two test sets, ‘common’ and ‘challenging’: the ‘common’ test set contains instances that are easier to predict (examples in supplementary materials S4), while the ‘challenging’ test set contains harder cases where faces are occluded or not clearly visible (supplement S4). We also found that, due to the semi-automatic nature of the annotation, the ground truth for some ‘challenging’ cases is not very precise, whereas our model’s predictions are more accurate. We hypothesise that where our model gives precise predictions against imprecise test annotations, this may have increased its reported error, since the ground truth does not match the prediction (see Fig. 7).

Recent work such as [13] has suggested that the L1 loss is more appropriate for facial landmark localisation than the L2 loss. Loss functions such as the Wing loss have also been developed specifically for this task. We used both the L1 and Wing losses and found that ChoiceNet performs favourably compared to other state-of-the-art CNN architectures as well as architectures purposely designed for this task, such as CNN6/7 [13].

6 Discussion

Model compactness As a result of the use of different filter sizes with feature concatenation and skip connections at every stage, feature maps learned by any layer in a block can be accessed by all subsequent layers. This extensive feature reuse throughout the network leads to a compact model.

Fig. 7

A comparison between the ground truth of the ‘challenging’ test set (left) and ChoiceNet-40’s prediction (right), showing that our model’s prediction is sometimes more accurate than the semi-automatically annotated ground truth (GT) but that this also increases the reported error, since the prediction does not match the GT

Feature reuse ChoiceNet uses different filter sizes with skip connections and channel concatenation in each module (see Fig. 1). Kernel sizes of 3, 5 and 7 were found to be optimal in [54, 55] compared to combinations such as 3, 5, 9 and 11 or 3, 7 and 11, because the other combinations make the network more costly without much improvement in performance. To gain a deeper, visual understanding of its operation, we took the weights of the first block (in ChoiceNet-30) and normalised them to the range [0, 1]. After normalising the weights, we mapped them to two categories: weights under 0.4 shown as white and weights over 0.4 shown as coloured (see Fig. 8), assuming that weights below 0.4 have an insignificant effect on the total performance. The figure shows that after the very first \(1\times 1\) convolution on the raw input, the convolutions with kernel size 7 have more effect than those of size 3 and 5. In the second module, all the convolution weights were under 0.4, which suggests that the model used either the feature maps of the earlier output via concatenation (red line between filters 5 and 7 of the middle module) or the skip connection (red line above filter 3 with the highlighted ‘+’ sign). On the one hand, this indicates that the skip connection, the channel concatenation or both are working as they are supposed to; on the other hand, it also means that the network still has many redundant parameters. In the third module, filters 3 and 5 had weights over 0.4, which indicates that they contributed to the network. We suspect that the selection of filter size 7 in the first module and of 3 and 5 in the third module echoes the hypothesis of AlexNet [28], where bigger filter sizes were found to work better at the beginning of a network and smaller filters in the later stages. However, the path chosen inside the network is not the same for every dataset. The bottom row of Fig. 8 displays the network trained on C10 (without augmentation). The dissimilarities show that even though the two models were trained on different versions (with and without augmentation) of the same dataset, the augmentation indirectly made the inside of the network quite different. Since it is very difficult to predict how the network will respond to a dataset, we cannot pre-select a path before training for optimal performance; our design therefore provides more choice within the network so that it can find the optimal path by itself.
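
One way to reproduce this inspection is sketched below; how the weights are aggregated per layer (absolute value, max-normalisation) is an assumption about the procedure rather than the exact code used to produce Fig. 8.

```python
import torch

def contributing_layers(block, threshold=0.4):
    """Normalise the block's weights to [0, 1] and report, per layer, the fraction above the threshold."""
    weights = {name: p.detach().abs()
               for name, p in block.named_parameters() if 'weight' in name}
    max_w = max(w.max() for w in weights.values())   # global maximum used for normalisation (assumed)
    scores = {}
    for name, w in weights.items():
        normalised = w / max_w                       # map weights to [0, 1]
        scores[name] = (normalised > threshold).float().mean().item()
    return scores                                    # layers with score 0 correspond to the white boxes in Fig. 8
```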

Ablation Study 1 In Table 7, we provide an ablation study on ChoiceNet-30 using the C10 and C10+ datasets. We disable different paths of the network, such as A and B (see Fig. 2), and compare the performance with the original model. The column ‘Difference between ChoiceNet-30’ shows the increase in error when a certain path is disabled, implying that the higher the error rate without that component, the greater its impact on the total performance. The table shows that on both C10 and C10+ the connection ‘E’ had the highest impact, but all the other paths had an impact as well, which confirms that every path within the network improves its performance. The small differences also suggest that all the paths contribute to a similar degree and no individual path dominates.
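
One possible way to disable an individual path is to zero out that branch before the final concatenation; the sketch below illustrates the idea on the illustrative `ChoiceModule` from Sect. 3 and assumes the label-to-branch mapping of Fig. 2 (A/B for the 3 × 3 branch, C/D for 5 × 5, F/G for 7 × 7). It is not the exact mechanism used in our experiments.

```python
import torch

LABELS = ['E', 'A', 'B', 'C', 'D', 'F', 'G']   # concatenation order used by this helper

def forward_with_ablation(module, x, disabled=()):
    """Run a ChoiceModule but zero out the selected paths before concatenation."""
    e = module.bottleneck(x)
    outputs = {'E': e}
    branch_labels = [('A', 'B'), ('C', 'D'), ('F', 'G')]   # (with skip, without skip) per kernel size
    for branch, (skip_label, plain_label) in zip(module.branches, branch_labels):
        b = branch(e)
        outputs[plain_label] = b        # without skip connection
        outputs[skip_label] = b + e     # with skip connection
    tensors = [torch.zeros_like(outputs[l]) if l in disabled else outputs[l] for l in LABELS]
    return torch.cat(tensors, dim=1)
```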

Fig. 8

An inside look at ChoiceNet. The top row shows the skeleton of ChoiceNet before training, while the middle and bottom rows show the pathways the model has chosen for the C10* (middle) and C10 (bottom) datasets for best classification accuracy after training. The coloured boxes and lines contribute the most

Ablation Study 2 Similarly, in Table 3 we show the effect of the two pooling methods within our architectural design. For all three models, max pooling gave an advantage over average pooling. ChoiceNet-40 achieved the lowest error rate of the individual pooling techniques, but it was surpassed by the same model when both poolings were used together. This shows that even though average pooling may in some cases be less effective than max pooling, using them together leads to improved performance.

Table 7 An error rate ablation study on ChoiceNet-30 on C10 and C10+

In Table 4, we show the mean intersection over union (m_IoU) on the CamVid dataset for some of the current state-of-the-art models. We used the U-Net training scheme and replaced the basic convolutional operations with ResBlocks, DenseBlocks and the ChoiceNet module (see Fig. 1). While our network has fewer parameters than the ResBlock and DenseBlock variants, it achieved a higher score. Note that even though our model achieved a good m_IoU score, it is not as good as some of the architectures designed specifically for segmentation [21, 24, 57,58,59]. Nevertheless, it performed well compared to both the ResBlock and DenseBlock variants as well as other general purpose convolutional neural networks [36]. Some outputs are displayed in section ‘S1’ of the supplementary materials.

In Tables 5 and 6, we show the performance of different state-of-the-art neural networks on the 300W dataset using the L1 loss and Wing loss, respectively. We also include methods such as CNN6/7, which were designed specifically for this task together with a robust loss function (Wing loss). The tables show that with both loss functions our model performs best on the ‘full’ test set. ChoiceNet also achieves the lowest error on the ‘challenging’ test set, which further demonstrates the strength of the proposed architecture. Detailed tables and graphs are displayed in sections ‘S2’ and ‘S3’ of the supplementary materials. In Fig. 7, we also show that in some cases the network predicts more precisely than the ground truth, which increases the reported error as the prediction does not match the less precise ground truth (Table 7).

Our intuition is that the extra connections and paths in our method enable the network to learn from a large variety of feature maps. They also enable the network to backpropagate errors more efficiently (see also [16, 18]). We found that, due to all the connections, the network can be prone to exploding gradients and therefore needs a small learning rate to begin with. We also found by grid search that the network shows peak performance when the depth is between 30 and 40 layers, and that further increasing the depth appears to have little effect. We suspect that ChoiceNet plateaus at a depth of 30–40 layers, although this could be a local minimum, as we could not train models deeper than 60 layers due to resource limitations.

The performance on the ImageNet dataset is displayed in Table 2. All three variants of our model achieve a lower top-1 error than other state-of-the-art architectures such as ResNet, DenseNet and Inception (v3/v4), and ChoiceNet-40 achieves the lowest top-1 and top-5 errors. This is a result of the unique connectivity design (see Fig. 2): using convolutional outputs with and without skip connections, using different kernel sizes, concatenating the original input in each module via connection ‘E’ of Fig. 2 and combining two different pooling techniques. In addition, because the architecture has many connections, it can work with fewer output channels per convolution operation, which makes it parameter efficient: for a given number of parameters it achieves better performance than other methods.

7 Conclusion

In this paper, we introduced a powerful yet lightweight and efficient network, ChoiceNet, which encodes better spatial information from images by learning from its numerous elements, such as skip connections, different filter sizes, dense connectivity and the use of both max and average pooling. ChoiceNet is a general purpose network with good generalisation ability and can be used across a wide range of tasks including, but not limited to, classification, image segmentation and facial landmark localisation. Our network shows promising performance when compared to state-of-the-art techniques across different tasks such as semantic segmentation and object classification, while being more efficient.