Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond

Automatic understanding of human affect using visual signals is of great importance in everyday human-machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative an emotion is) and arousal (i.e., the power of the emotion's activation) constitute the most popular and effective affect representations. Nevertheless, the majority of datasets collected thus far, although containing naturalistic emotional states, have been captured in highly controlled recording conditions. In this paper, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also report on the results of the First Affect-in-the-wild Challenge (Aff-Wild Challenge), which was recently organized on the Aff-Wild database and was the first ever challenge on the estimation of valence and arousal in-the-wild. Furthermore, we design and extensively train an end-to-end deep neural architecture which performs prediction of continuous emotion dimensions based on visual cues. The proposed deep learning architecture, AffWildNet, includes convolutional and recurrent neural network (CNN-RNN) layers, exploiting the invariant properties of convolutional features, while also modeling temporal dynamics that arise in human behavior via the recurrent layers. The AffWildNet produced state-of-the-art results on the Aff-Wild Challenge. We then exploit the Aff-Wild database for learning features which can be used as priors for achieving the best performances, compared to all other methods designed for the same goal, for both dimensional and categorical emotion recognition, using the RECOLA, AFEW-VA and EmotiW 2017 datasets.


Introduction
Current research in automatic analysis of facial affect aims at developing systems, such as robots and virtual humans, that will interact with humans in a naturalistic way under real-world settings. To this end, such systems should automatically sense and interpret facial signals relevant to emotions, appraisals and intentions. Moreover, since real-world settings entail uncontrolled conditions, where subjects operate in a diversity of contexts and environments, systems that perform automatic analysis of human behavior should be robust to video recording conditions, the diversity of contexts and the timing of display. For the past twenty years, research in automatic analysis of facial behavior was mainly limited to posed behavior captured in highly controlled recording conditions [34,40,54,56]. Some representative datasets, surveyed in [27], are the Cohn-Kanade database [34,54], the MMI database [40,56], the Multi-PIE database [22] and the BU-3D and BU-4D databases [61,62].
Nevertheless, it is now accepted by the community that the facial expressions of naturalistic behaviors can be radically different from posed ones [10,47,65]. Hence, efforts have been made to collect data of subjects displaying naturalistic behavior. Examples include the recently collected EmoPain [4] and UNBC-McMaster [35] databases for the analysis of pain, the RU-FACS database of subjects participating in a false opinion scenario [5] and the SEMAINE corpus [38], which contains recordings of subjects interacting with a Sensitive Artificial Listener (SAL) in controlled conditions. All the above databases have been captured in well-controlled recording conditions and mainly under a strictly defined scenario (e.g., one eliciting pain).
Representing human emotions has been a basic topic of research in psychology. The most frequently used emotion representation is the categorical one, including the seven basic categories, i.e., Anger, Disgust, Fear, Happiness, Sadness, Surprise and Neutral [11,14]. It is, however, the dimensional emotion representation [46,60] that is more appropriate for representing subtle, i.e., not only extreme, emotions appearing in everyday human-computer interactions. To this end, the 2-D valence and arousal space is the most common dimensional emotion representation. Figure 1 shows the 2-D Emotion Wheel [42], with valence ranging from very positive to very negative and arousal ranging from very active to very passive.
Currently, there are many challenges (competitions) in the behavior analysis domain. One such example is the Audio/Visual Emotion Challenges (AVEC) series [43,44,55,57,58] which started in 2011. The first challenge [48] (2011) used the SEMAINE database for classification purposes by binarizing its continuous values, while the second challenge [49] (2012) used the same database but with its original values. The last challenge (2017) [44] utilized the SEWA database. Before this and for two consecutive years (2015 [43], 2016 [55]) the RECOLA dataset was used.
However, as shown in Table 1, these databases have some of the following limitations: (1) they contain data recorded in laboratory or controlled environments; (2) their diversity is limited, due to the small total number of subjects they contain, the limited amount of head pose variations and present occlusions, the static backgrounds or uniform illumination; (3) the total duration of their included videos is rather short. To tackle the aforementioned limitations, we collected the first, to the best of our knowledge, large-scale captured-in-the-wild database and annotated it in terms of valence and arousal. To do so, we capitalized on the abundance of data available on video-sharing websites, such as YouTube [63], and selected videos that display the affective behavior of people, for example videos that display the behavior of people when watching a trailer, a movie or a disturbing clip, or their reactions to pranks.
To this end, we have collected 298 videos displaying reactions of 200 subjects, with a total video duration of more than 30 hours. This database has been annotated by 8 lay experts with regard to two continuous emotion dimensions, i.e., valence and arousal. We then organized the Aff-Wild Challenge, based on the Aff-Wild database [64], in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017. The participating teams submitted their results to the challenge, outperforming the provided baseline. However, as described later in this paper, the achieved performances were rather low.
For this reason, we capitalized on the Aff-Wild database to build CNN and CNN plus RNN architectures that are shown to achieve excellent performance on this database, outperforming the performances of all previous participants. We conducted extensive experiments, testing structures for combining convolutional and recurrent neural networks and training them altogether as an end-to-end architecture. We used a loss function based on the Concordance Correlation Coefficient (CCC), which we also compare with the usual Mean Squared Error (MSE) criterion. Additionally, we appropriately fused, within the network structures, two types of inputs: the 2-D facial images, presented at the input of the end-to-end architecture, and the 2-D facial landmark positions, presented at the first fully connected layer of the architecture.
We have also investigated the use of the created CNN-RNN architecture for valence-arousal estimation on other datasets, focusing on RECOLA and AFEW-VA. Last but not least, taking into consideration the large in-the-wild nature of this database, we show that our network can also be used for other emotion recognition tasks, such as classification of the universal expressions.
The only challenge, apart from the last AVEC (2017) [44], using 'in-the-wild' data is the EmotiW series [16][17][18][19][20]. It uses the AFEW dataset, whose samples come from movies, TV shows and series. To the best of our knowledge, this is the first time that a dimensional database, and features extracted from it, are used as priors for categorical emotion recognition in-the-wild, exploiting the EmotiW Challenge dataset.
To summarize, there exist several databases for dimensional emotion recognition. However, they have limitations, mostly due to the fact that they are not captured in-the-wild (i.e., not in uncontrolled conditions). This urged us to create the benchmark Aff-Wild database and organize the Aff-Wild Challenge. The acquired results are presented later in full detail. We then conducted experiments and built CNN and CNN plus RNN architectures, including the AffWildNet, producing state-of-the-art results. Upon acceptance of this article, the AffWildNet's weights will be made publicly available.
The main contributions of the paper are the following:
• It is the first time that a large in-the-wild database, with a large variety of (1) emotional states, (2) rapid emotional changes, (3) ethnicities, (4) head poses, (5) illumination conditions and (6) occlusions, has been generated and used for emotion recognition.
• An appropriate state-of-the-art deep neural network (DNN), the AffWildNet, has been developed, which is capable of learning to model all these phenomena. This has not been technically straightforward, as can be verified by comparing the AffWildNet's performance to the performances of the DNNs developed by the other research groups which participated in the Aff-Wild Challenge.
• It is shown that the AffWildNet is capable of generalizing its knowledge to other emotion recognition datasets and contexts. By learning complex and emotionally rich features of Aff-Wild, the AffWildNet constitutes a robust prior for both dimensional and categorical emotion recognition. To the best of our knowledge, it is the first time that state-of-the-art performances are achieved in this way.
The rest of the paper is organized as follows. Section 2 presents the databases generated and used in the presented experiments. Section 3 describes the pre-processing and annotation methodologies that we used. Section 4 begins by describing the Aff-Wild Challenge that was organized, the baseline method, the methodologies of the participating teams and their results. It then presents the end-to-end DNNs which we developed and the best performing AffWildNet architecture. Finally experimental studies and results are presented, which illustrate the above developments. Section 5 describes how the AffWildNet can be used as a prior for other, both dimensional and categorical, emotion recognition problems yielding state-of-the-art results. Finally, Section 6 presents the conclusions and future work following the reported developments.

Existing Databases
We briefly present the RECOLA, AFEW, AFEW-VA databases used for emotion recognition and mention their limitations which lead to the creation of the Aff-Wild database. Table 2 summarizes these limitations, also showing the superior properties of Aff-Wild.

RECOLA Dataset
The REmote COLlaborative and Affective (RECOLA) database was introduced by Ringeval et al. [45]; it contains natural and spontaneous emotions in the continuous domain (arousal and valence). The corpus includes four modalities: audio, visual, electro-dermal activity and electro-cardiogram. It consists of 46 French-speaking subjects, with 9.5 hours of recordings in total. The recordings were annotated, for 5 minutes each, by 6 French-speaking annotators (three male, three female). The dataset is divided into three parts, namely training (16 subjects), validation (15 subjects) and test (15 subjects) sets, in such a way that gender, age and mother tongue are stratified (i.e., balanced).
The main limitations of this dataset include the tightly controlled laboratory environment, as well as the small number of subjects. It should be also noted that it contains a moderate total number of frames.

The AFEW Dataset
The series of EmotiW challenges [16][17][18][19][20] makes use of data from the Acted Facial Expressions in the Wild (AFEW) dataset [16]. This dataset is a dynamic temporal facial expressions corpus consisting of close-to-real-world scenes extracted from movies and reality TV shows. In total, it contains 1809 videos. The whole dataset is split into three sets: a training set (773 video clips), a validation set (383 video clips) and a test set (653 video clips). It should be emphasized that both the training and validation sets are mainly composed of real movie records; however, 114 out of the 653 video clips in the test set are real TV clips, thus increasing the difficulty of the challenge. The number of subjects is more than 330, aged 1-77 years. The annotation is according to 7 facial expressions (Anger, Disgust, Fear, Happiness, Neutral, Sadness and Surprise) and was performed by three annotators. The EmotiW challenges focus on audiovisual classification of each clip into the seven basic emotion categories.
The limitations of the AFEW dataset include its small size (in terms of total number of frames) and its restriction to only seven emotion categories, some of which (fear, disgust, surprise) include a small number of samples.

The AFEW-VA Database
Very recently, a part of the AFEW dataset of the series of EmotiW challenges was annotated in terms of valence and arousal, thus creating the so-called AFEW-VA [30] database. In total, it contains 600 video clips that were extracted from feature films and simulate real-world conditions, i.e., occlusions, different illumination conditions and free movements of the subjects. The videos range from short (around 10 frames) to longer clips (more than 120 frames). This database includes per-frame annotations of valence and arousal. In total, more than 30,000 frames were annotated for dimensional affect prediction of arousal and valence, using discrete values in the range of [−10, +10].
The database's limitations include its small size (in terms of total number of frames), the small number of annotators (only 2) and the use of discrete values for valence and arousal. It should be noted that the 2-D Emotion Wheel (Figure 1) is a continuous space. Therefore, using only discrete values for valence and arousal provides a rather coarse approximation of the behavior of persons in their everyday interactions. On the other hand, using continuous values can provide improved modeling of the expressiveness and richness of the emotional states met in everyday human behaviors.

The Aff-Wild Database
We created a database consisting of 298 videos, with a total length of more than 30 hours. The aim was to collect spontaneous facial behaviors in arbitrary recording conditions. To this end, the videos were collected from the YouTube video-sharing website. The main keyword used to retrieve the videos was "reaction". The database displays subjects reacting to a variety of stimuli, e.g., viewing an unexpected plot twist of a movie or series, a trailer of a highly anticipated movie, or tasting something hot or disgusting. The subjects display both positive and negative emotions (or combinations of them). In other cases, subjects display emotions while performing an activity (e.g., riding a roller coaster). In some videos, subjects react to a practical joke, or to positive surprises (e.g., a gift). The videos contain subjects of different genders and ethnicities, with high variations in head pose and lighting.
Most of the videos are in YUV 4:2:0 format, with some of them being in AVI format. Eight annotators annotated the videos, in terms of valence and arousal, following a methodology similar to the one proposed in [12]. An online annotation procedure was used, according to which annotators watched each video and provided their annotations through a joystick. Valence and arousal range continuously in [−1, +1]. All subjects present in each video have been annotated. The total number of subjects is 200, with 130 of them being male and 70 female. Table 3 shows the general attributes of the Aff-Wild database. Figure 2 shows some frames from the Aff-Wild database, with people from different ethnicities displaying various emotions, under different head poses and illumination conditions, as well as occlusions in the facial area. Figure 3 shows an example of annotated valence and arousal values over a part of a video in Aff-Wild, together with corresponding frames. This illustrates the in-the-wild nature of our database, namely the many different emotional states, rapid emotional changes and occlusions in the facial areas that it includes. Figure 3 also shows the use of continuous values for valence and arousal annotation, which gives the ability to effectively model all these different phenomena. Figure 4 provides a histogram of the annotated values for valence and arousal in the generated database.

Data Pre-processing and Annotation
In this section, we describe the pre-processing of the Aff-Wild videos performed so as to detect faces and facial landmarks. Then we present the annotation procedure, including: (1) creation of the annotation tool; (2) generation of guidelines for six experts to follow in order to perform the annotation. The detected faces and facial landmarks, as well as the generated annotations, are publicly available with the Aff-Wild database.
Finally, we present a statistical analysis of the annotations created for each video, illustrating the consistency of annotations achieved by using the above procedure.

Aff-Wild video pre-processing
VirtualDub [32] was first used to trim the raw YouTube videos, mainly at their beginning and end-points, in order to remove irrelevant content (e.g., advertisements). Then, we extracted a total of 1,180,000 video frames using the Menpo software [2]. In each frame, we detected the faces and generated corresponding bounding boxes, using the method described in [37]. Next, we extracted facial landmarks in all frames using the best performing method indicated in [8].
During this process, we removed frames in which the bounding box or landmark detection failed. Failures occurred when either the bounding boxes or the landmarks were wrongly detected, or were not detected at all. The former case was semi-automatically discovered by: (i) detecting significant shifts in the bounding box and landmark positions between consecutive frames and (ii) having the annotators verify the wrong detections in those frames.
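The shift-based check in step (i) can be sketched as follows. This is an illustrative numpy sketch; the function name and the threshold value are our own assumptions, not details taken from the paper.

```python
import numpy as np

def flag_detection_failures(boxes, shift_thresh=0.3):
    """Flag frames whose face bounding box jumps abruptly between
    consecutive frames (a likely detection failure).

    boxes: (N, 4) array of [x1, y1, x2, y2], one row per frame.
    The threshold is relative to the box width and is a hypothetical
    choice. Returns indices of frames to hand to the annotators for
    manual verification.
    """
    boxes = np.asarray(boxes, dtype=float)
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0
    widths = np.maximum(boxes[:, 2] - boxes[:, 0], 1e-6)
    # Center displacement between consecutive frames, relative to box width
    shifts = np.linalg.norm(np.diff(centers, axis=0), axis=1) / widths[:-1]
    return np.where(shifts > shift_thresh)[0] + 1
```

Flagged frames would then be shown to the annotators, matching step (ii) above.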

Annotation tool
For data annotation, we developed our own application that builds on other existing ones, like Feeltrace [12] and Gtrace [13]. A time-continuous annotation is performed for each affective dimension, with the annotation process being as follows: (a) the user logs in to the application using an identifier (e.g. his/her name) and selects an appropriate joystick; (b) a scrolling list of all videos appears and the user selects a video to annotate; (c) a screen appears that shows the selected video and a slider of valence or arousal values ranging in [−1, 1]; (d) the user annotates the video by moving the joystick either up or down; (e) finally, a file is created including the annotation values and the corresponding time instances that the annotations are generated.
It should be mentioned that the time instances generated in step (e) above did not generally match the video frame rate. To tackle this problem, we re-sampled the annotation time instances using nearest-neighbor interpolation. Figure 5 shows the graphical interface of our tool when annotating valence (the interface for arousal is similar); this corresponds to step (c) of the annotation process described above. It should be added that the annotation tool can also show the inserted valence and arousal annotations while displaying the respective video. This is used for annotation verification in a post-processing step.
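The nearest-neighbor re-sampling step can be sketched as follows. This is a minimal numpy illustration; the function name and interface are our assumptions, not the actual tool's code.

```python
import numpy as np

def resample_annotations(ann_times, ann_values, frame_times):
    """Re-sample joystick annotations to video frame timestamps with
    nearest-neighbor interpolation, since the annotation time
    instances do not generally match the video frame rate."""
    ann_times = np.asarray(ann_times, dtype=float)
    ann_values = np.asarray(ann_values, dtype=float)
    frame_times = np.asarray(frame_times, dtype=float)
    # Index of the first annotation time >= each frame time
    idx = np.searchsorted(ann_times, frame_times)
    idx = np.clip(idx, 1, len(ann_times) - 1)
    left, right = ann_times[idx - 1], ann_times[idx]
    # Step back one index where the left neighbor is closer
    idx -= (frame_times - left) < (right - frame_times)
    return ann_values[idx]
```

For example, annotations sampled at 0 s, 1 s, 2 s would map to frames at 0.1 s, 0.9 s and 1.6 s by picking the nearest annotation in time.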

Annotation guidelines
Six experts were chosen to perform the annotation task. Each annotator was instructed, orally and through a multi-page document, on the procedure to follow for the task. This document included a list of well-identified emotional cues for both arousal and valence, providing a common basis for the annotation task. On top of that, the experts used their own appraisal of the subject's emotional state when creating the annotations. Before starting the annotation of each video, the experts watched the whole video, so as to know what to expect regarding the emotions displayed in it.

Annotation Post-processing
A post-processing annotation verification step was also performed. Every expert-annotator watched all videos for a second time, in order to verify that the recorded annotations were in accordance with the emotions shown in the videos, or to change the annotations accordingly. In this way, a further validation of the annotations was achieved.
After the annotations had been validated by the annotators, a final annotation selection step followed. Two new experts watched all videos and, for every video, selected the annotations (between two and four) which best described the displayed emotions. The mean of these selected annotations constitutes the final Aff-Wild labels.
This step is significant for obtaining highly correlated annotations, as shown by the statistical analysis presented next.

Statistical Analysis of Annotations
In the following, we provide a quantitative statistical analysis of the achieved Aff-Wild labeling. At first, for each video, and independently for valence and arousal, we computed: (i) the inter-annotator correlations, i.e., the correlations of each one of the six annotators with all other annotators, which resulted in five correlation values per annotator; (ii) for each annotator, his/her average inter-annotator correlation, resulting in one value per annotator; the mean of those six average inter-annotator correlation values is denoted next as MAC-A; (iii) the average inter-annotator correlations across only the selected annotators, as described in the previous subsection, resulting in one value per selected annotator; the mean of those 2-4 average inter-selected-annotator correlation values is denoted next as MAC-S. We then computed, over all videos and independently for valence and arousal, the mean of the MAC-A and MAC-S values computed in (ii) and (iii) above. The mean MAC-A is 0.47 for valence and 0.46 for arousal, whilst the mean MAC-S is 0.71 for valence and 0.70 for arousal. An example set of annotations is shown in Figure 6, in an effort to further clarify the obtained MAC-S values. It shows the four selected annotations in a video segment, for valence and arousal respectively, with a MAC-S value of 0.70 (similar to the mean MAC-S value obtained over all of Aff-Wild).
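The per-annotator averaging behind the MAC values can be sketched as follows. This is an illustrative numpy sketch for one video and one dimension; the function name is ours, not from the paper.

```python
import numpy as np

def mean_inter_annotator_corr(annotations):
    """Compute, for each annotator, the average Pearson correlation
    with every other annotator, then return the mean of those
    averages (the MAC quantity described in the text).

    annotations: (num_annotators, num_frames) array holding the
    valence or arousal traces of one video.
    """
    A = np.asarray(annotations, dtype=float)
    n = A.shape[0]
    R = np.corrcoef(A)                       # pairwise Pearson correlations
    # Average each annotator's correlation with the n-1 others
    # (subtract the self-correlation of 1 on the diagonal)
    per_annotator = (R.sum(axis=1) - 1.0) / (n - 1)
    return per_annotator.mean()
```

Applying this to all six annotators gives a video's contribution to MAC-A; applying it to only the selected annotators gives its contribution to MAC-S.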
In addition, Figure 7 shows the cumulative distribution of the MAC-S and MAC-A values over all Aff-Wild videos, for valence (Figure 7a) and arousal (Figure 7b). In each case, two curves are shown. Every point (x, y) on these curves has a y value showing the percentage of videos with a (i) MAC-S (red curve) or (ii) MAC-A (blue curve) value greater than or equal to x; the latter denotes an average correlation in [0, 1]. It can be observed that the mean MAC-S value, corresponding to a value of 0.5 on the vertical axis, is 0.71 for valence and 0.70 for arousal. These plots also illustrate that the MAC-S values are much higher than the corresponding MAC-A values, for both the valence and arousal annotations, verifying the effectiveness of the annotation post-processing procedure.
Next, we conducted similar experiments for the valence/ arousal average annotations and the facial landmarks in each video, in order to evaluate the correlation of annotations to landmarks. To this end, we utilized Canonical Correlation Analysis (CCA) [23]. In particular, for each video and independently for valence and arousal, we computed the correlation between landmarks and the average of (i) all or (ii) selected annotations.
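With a one-dimensional second view (the average annotation trace), the first canonical correlation reduces to the multiple correlation between the annotation and its best linear predictor from the landmarks. The following numpy-only sketch illustrates this special case; it is our simplification for exposition, not the paper's exact CCA implementation.

```python
import numpy as np

def canonical_corr_1d(X, y):
    """First canonical correlation between a multivariate signal X
    (frames x landmark coordinates) and a 1-D average annotation y.

    For a one-dimensional second view, CCA reduces to correlating y
    with its least-squares projection onto the column space of X.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xc = X - X.mean(axis=0)                  # center both views
    yc = y - y.mean()
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    proj = Xc @ beta                         # best linear predictor of y from X
    return np.corrcoef(proj, yc)[0, 1]
```

If the annotation is an exact linear function of the landmark coordinates, the returned correlation is 1.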

Developing the AffWildNet
This section begins by presenting the first Aff-Wild Challenge that was organized based on the Aff-Wild database and held in conjunction with CVPR 2017. It includes short descriptions and results of the algorithms of the six research groups that participated in the challenge. Although the results are promising, there is much room for improvement.
For this reason we developed our own CNN and CNN plus RNN architectures based on the Aff-Wild database. We propose the AffWildNet as the best performing among the developed architectures. Our developments and ablation studies are presented next.

The Aff-Wild Challenge
The training data (i.e., videos and annotations) of the Aff-Wild challenge were made publicly available on the 30th of January 2017, followed by the release of the test videos (without annotations). The participants were given the freedom to split the data into train and validation sets, as well as to use any other dataset. The maximum number of submitted entries for each participant was three. Table 4 summarizes the specific attributes (numbers of males, females, videos, frames) of the training and test sets of the challenge.
In total, ten different research groups downloaded the Aff-Wild database. Six of them ran experiments and submitted their results to the workshop portal. Based on the performance they obtained on the test data, three of them were selected to present their results at the workshop. Two criteria were considered for evaluating the performance of the networks. The first one is the Concordance Correlation Coefficient (CCC) [31], which is widely used in measuring the performance of dimensional emotion recognition methods, e.g., in the series of AVEC challenges. CCC evaluates the agreement between two time series (e.g., all video annotations and predictions) by scaling their correlation coefficient with their mean square difference. In this way, predictions that are well correlated with the annotations but shifted in value are penalized in proportion to the deviation. CCC takes values in the range [−1, 1], where +1 indicates perfect concordance and −1 denotes perfect discordance. The higher the value of the CCC, the better the fit between annotations and predictions; therefore, high values are desired. The mean value of CCC for valence and arousal estimation was adopted as the main evaluation criterion. CCC is defined as follows:

$$\rho_c = \frac{2 s_{xy}}{s_x + s_y + (\bar{x} - \bar{y})^2} = \frac{2 \rho_{xy} \sqrt{s_x s_y}}{s_x + s_y + (\bar{x} - \bar{y})^2},$$

where $\rho_{xy}$ is the Pearson Correlation Coefficient (Pearson CC), $s_x$ and $s_y$ are the variances of all video valence/arousal annotations and predicted values, respectively, $\bar{x}$ and $\bar{y}$ are the corresponding mean values and $s_{xy}$ is the corresponding covariance value. The second criterion is the Mean Squared Error (MSE), which is defined as follows:

$$MSE = \frac{1}{N} \sum_{i=1}^{N} (x_i - y_i)^2,$$

where $x$ and $y$ are the (valence/arousal) annotations and predictions, respectively, and $N$ is the total number of samples.
The MSE gives a rough indication of how the derived emotion model behaves, providing a simple comparative metric. A small MSE value is desired.
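The two criteria can be computed as in the following minimal numpy sketch, which follows the definitions above (variances in the CCC denominator, plus the squared difference of the means):

```python
import numpy as np

def ccc(x, y):
    """Concordance Correlation Coefficient between annotations x and
    predictions y: the covariance scaled by the variances plus the
    squared mean difference, so value-shifted predictions are
    penalized even when perfectly correlated."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    sxy = ((x - x.mean()) * (y - y.mean())).mean()   # covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def mse(x, y):
    """Mean Squared Error between annotations and predictions."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return ((x - y) ** 2).mean()
```

Note that predictions shifted by a constant keep a Pearson correlation of 1 but receive a CCC below 1, which is exactly the penalization described above.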

Baseline Architecture
The baseline architecture for the challenge was based on the CNN-M [7] network, as a simple model that could be used to initiate the procedure. In particular, our network used the convolutional and pooling parts of CNN-M, trained on the FaceValue dataset [3]. On top of that, we added one 4096-unit fully connected layer and a 2-unit fully connected layer that provides the valence and arousal predictions. The interested reader can refer to Appendix A for a short description and the structure of this architecture. The input to the network was the facial images, resized to a resolution of 224 × 224 × 3 or 96 × 96 × 3, with intensity values normalized to the range [−1, 1].
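The input preprocessing can be sketched as follows, using the 96 × 96 case. This is a dependency-free illustration; the nearest-neighbor resize via index selection is our assumption, since the paper does not specify the resizing method.

```python
import numpy as np

def preprocess_face(frame):
    """Resize a cropped face to 96x96x3 and scale intensities from
    [0, 255] to [-1, 1], matching the baseline's input format.
    Resizing is plain nearest-neighbor index selection to keep the
    sketch dependency-free."""
    frame = np.asarray(frame, dtype=np.float32)
    h, w = frame.shape[:2]
    rows = np.arange(96) * h // 96           # nearest source row per output row
    cols = np.arange(96) * w // 96           # nearest source column per output column
    resized = frame[rows][:, cols]
    return resized / 127.5 - 1.0             # map [0, 255] -> [-1, 1]
```

An all-white 192 × 192 input, for instance, comes out as a 96 × 96 × 3 array of ones.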
In order to train the network, we utilized the Adam optimizer; the batch size was set to 80 and the initial learning rate to 0.001. Training was performed on a single GeForce GTX TITAN X GPU and took about 4-5 days. The implementation platform was TensorFlow [1].

Participating Teams' Algorithms
The three papers accepted to this challenge are briefly reported below, while Table 5 compares the acquired results (in terms of CCC and MSE) by all three methods and the baseline network. As one can see, FATAUVA-Net [6] has provided the best results in terms of the mean CCC and mean MSE for valence and arousal.
We should note that after the end of the challenge, more groups enquired about the Aff-Wild database and sent results for evaluation, but here we report only on the teams that participated in the challenge.
In the MM-Net method [33], a variation of a deep convolutional residual neural network (ResNet) [24] is first presented for affective level estimation of facial expressions. Then, multiple memory networks are used to model temporal relations between the video frames. Finally, ensemble models are used to combine the predictions of the multiple memory networks, showing that the latter steps improve the initially obtained performance, as far as MSE is concerned, by more than 10%.
In the FATAUVA-Net method [6], a deep learning framework is presented, in which a core layer, an attribute layer, an action unit (AU) layer and a valence-arousal layer are trained sequentially. The core layer is a series of convolutional layers, followed by the attribute layer which extracts facial features. These layers are applied to supervise the learning of AUs. Finally, AUs are employed as midlevel representations to estimate the intensity of valence and arousal.
In the DRC-Net method [36], three neural network-based methods which are based on Inception-ResNet [53] modules redesigned specifically for the task of facial affect estimation are presented and compared. These methods are: Shallow Inception-ResNet, Deep Inception-ResNet, and Inception-ResNet with Long Short Term Memory [25]. Facial features are extracted in different scales and both, the valence and arousal, are simultaneously estimated in each frame. Best results are obtained by the Deep Inception-ResNet method.
All participants applied deep learning methods to the problem of emotion analysis of the video inputs. The following conclusions can be drawn from the reported results. First, the CCC of the arousal predictions was very low for all three methods. Second, the MSE of the valence predictions was high for all three methods, and the CCC was low, except for the winning method. This illustrates the difficulty of recognizing emotion in-the-wild, where, for instance, illumination conditions differ, occlusions are present and different head poses are encountered.

Deep Neural Architectures & Ablation Studies
Here, we present our developments and ablation studies towards designing deep CNN and CNN plus RNN architectures for Aff-Wild. We propose the AffWildNet, a CNN plus RNN network, as the architecture that produced the best results on the database.

The Roadmap
A. We considered two network settings: (1) a CNN trained in an end-to-end manner, i.e., using raw intensity pixels, to produce 2-D predictions of valence and arousal; (2) an RNN stacked on top of the CNN to capture temporal information in the data before predicting the affect dimensions; this was also trained in an end-to-end manner. To extract features from the frames, we experimented with three CNN architectures, namely ResNet-50, VGG-Face [41] and VGG-16 [50]. To consider the contextual information in the data (RNN case), we experimented with both the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU) [9] architectures.

B. To further boost the performance of the networks, we also experimented with the use of facial landmarks. Here we should note that the facial landmarks are provided on-the-fly for training and testing the networks. The following two scenarios were tested: (1) the networks were applied directly on cropped facial video frames of the generated database; (2) the networks were trained on both the facial video frames and the facial landmarks corresponding to each frame.
C. Since the main evaluation criterion of the Aff-Wild Challenge was the mean value of the CCC for valence and arousal, our loss function was based on that criterion and was defined as L = 1 − (ρ_a + ρ_v)/2, where ρ_a and ρ_v are the CCC for arousal and valence, respectively.
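As a concrete illustration, the CCC-based loss can be sketched as follows; this is a minimal NumPy version (the actual training code used TensorFlow, and the function names here are illustrative):

```python
import numpy as np

def ccc(x, y):
    """Concordance Correlation Coefficient between predictions x and labels y."""
    x_m, y_m = x.mean(), y.mean()
    x_v, y_v = x.var(), y.var()
    cov = ((x - x_m) * (y - y_m)).mean()
    return 2.0 * cov / (x_v + y_v + (x_m - y_m) ** 2)

def ccc_loss(pred, gold):
    """Loss = 1 - (rho_a + rho_v) / 2 over (N, 2) arrays of [valence, arousal]."""
    rho_v = ccc(pred[:, 0], gold[:, 0])
    rho_a = ccc(pred[:, 1], gold[:, 1])
    return 1.0 - (rho_a + rho_v) / 2.0
```

Perfect predictions yield CCC = 1 per dimension and therefore a loss of 0; predictions that are correlated but biased in mean or scale are penalized, which is what distinguishes the CCC from the plain Pearson correlation.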
D. In order to have a more balanced training dataset, we performed data augmentation, mainly through oversampling by duplicating [39] some data from the Aff-Wild database. We copied small video parts showing less-populated valence and arousal values. In particular, we duplicated consecutive video frames that had negative valence and arousal values, as well as positive valence and negative arousal values. As a consequence, the training set consisted of about 43% positive valence and arousal values, 24% negative valence and positive arousal values, 19% positive valence and negative arousal values and 14% negative valence and arousal values. Our main target has been a trade-off between generating balanced emotion sets and avoiding severely changing the content of the videos.
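The duplication-based oversampling described above can be sketched as follows; the quadrant-targeting scheme and function names are illustrative, not the exact procedure used:

```python
import numpy as np

def oversample_quadrants(frames, labels, targets):
    """Duplicate frames whose (valence, arousal) pair falls in an
    under-represented quadrant until that quadrant reaches its target count.
    `labels` is (N, 2); `targets` maps quadrant id -> desired minimum count.
    Quadrants: 0: V>=0,A>=0   1: V<0,A>=0   2: V>=0,A<0   3: V<0,A<0."""
    quad = (labels[:, 0] < 0).astype(int) + 2 * (labels[:, 1] < 0).astype(int)
    keep = list(range(len(frames)))
    for q, want in targets.items():
        idx = np.where(quad == q)[0]
        have = len(idx)
        while have < want and len(idx) > 0:
            take = idx[: min(len(idx), want - have)]  # copy a consecutive chunk
            keep.extend(take.tolist())
            have += len(take)
    keep.sort()  # keep temporal order so copied parts stay consecutive
    return [frames[i] for i in keep], labels[keep]
```

Duplicating whole consecutive chunks (rather than random single frames) preserves the temporal continuity that the RNN part of the model relies on.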

Developing CNN architectures for the Aff-Wild
For the CNN architectures, we considered the ResNet-50 and VGG-16 networks, pre-trained on the ImageNet [15] dataset, which has been broadly used for state-of-the-art object detection. We also considered the VGG-Face network, pre-trained for face recognition on the VGG-Face dataset [41]. VGG-Face proved to provide the best results, as reported next in the experimental section. It is worth mentioning that in our experiments we have trained these architectures both for predicting valence and arousal jointly at their output and for predicting valence and arousal separately.
The obtained results were similar in the two cases. In all experiments presented next, we focus on the simultaneous prediction of valence and arousal. The first architecture we utilized was the 50-layer deep residual network (ResNet-50) [24], on top of which we stacked a 2-layer fully connected (FC) network. For the first FC layer, the best results have been obtained with 1500 units; for the second FC layer, 256 units provided the best results. An output layer with two linear units followed, providing the valence and arousal predictions. The interested reader can refer to Appendix A for a short description and the structure of this architecture.
The other architecture that we utilized was based on the convolutional and pooling layers of the VGG-Face or VGG-16 networks, on top of which we stacked a 2-layer FC network. For the first FC layer, the best results have been obtained with 4096 units; for the second FC layer, 2048 units provided the best results. An output layer with two linear units followed, providing the valence and arousal predictions. The interested reader can refer to Appendix A for a short description and the structure of this architecture as well.
When landmarks were used (scenario B.2 in subsection 4.2.1), they were input to the first FC layer along with: (i) the outputs of the ResNet-50, or (ii) the outputs of the last pooling layer of the VGG-Face/VGG-16. In this way, both outputs and landmarks were mapped to the same feature space before performing the prediction. With respect to parameter selection in these CNN architectures, we have used a batch size in the range 10−100 and a constant learning rate in the range 0.0001−0.001. The best results have been obtained with a batch size of 50 and a learning rate of 0.001. The dropout probability has been set to 0.5.
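The fusion of the pooled CNN outputs with the 68 2-D landmarks at the first FC layer can be sketched as below; the shapes and function names are illustrative:

```python
import numpy as np

def fuse_features(pool_out, landmarks):
    """Concatenate the flattened pooling output with the 68 x 2 landmark
    coordinates, so both are mapped to the same feature space by the FC layer."""
    flat = pool_out.reshape(pool_out.shape[0], -1)    # (batch, C*H*W)
    lms = landmarks.reshape(landmarks.shape[0], -1)   # (batch, 136)
    return np.concatenate([flat, lms], axis=1)

def fc_layer(x, w, b):
    """First FC layer with ReLU activation."""
    return np.maximum(0.0, x @ w + b)
```

Concatenation followed by a shared FC layer lets the network learn one joint mapping over appearance features and face geometry, instead of treating the two cues separately.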

Developing CNN plus RNN architectures for the Aff-Wild
In order to consider the contextual information in the data, we developed a CNN-RNN architecture, in which the RNN part was fed with the outputs of either the first, or the second fully connected layer of the respective CNN networks.
The structure of the RNN which we examined consisted of one or two hidden layers, with 100−150 units, following either the LSTM neuron model with peephole connections or the GRU neuron model. Using one fully connected layer in the CNN part and two hidden layers with GRUs in the RNN part has been found to provide the best results. An output layer followed, including two linear units, providing the valence and arousal predictions. Table 6 shows the configuration of the CNN-RNN architecture. The CNN part of this architecture was based on the convolutional and pooling layers of the CNN architectures described above (VGG-Face or ResNet-50), followed by a fully connected layer. Note that in the case of scenario B.2 of subsection 4.2.1, both the outputs of the last pooling layer of the CNN and the 68 landmark 2-D positions (68 × 2 values) were provided as inputs to this fully connected layer. Table 6 shows the respective numbers of units for the GRU and the fully connected layers. We call this CNN plus RNN architecture AffWildNet and illustrate it in Figure 9.

Network evaluation has been performed by testing different parameter values. The parameters included the batch size and sequence length used for network parameter updating, the learning rate and the dropout probability. The final selection of these parameters was similar to the CNN cases, apart from the sequence length, which was selected in the range 50−200, and the batch size, which was selected in the range 2−10. The best results have been obtained with sequence length 80 and batch size 4. We note that all deep learning architectures have been implemented in the TensorFlow platform.
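The GRU recurrence at the core of the CNN-RNN part can be sketched as follows. This is a single-layer, NumPy-only illustration with randomly initialized weights (AffWildNet itself uses two GRU layers of 128 units, trained end-to-end in TensorFlow):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, p):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(x @ p['Wz'] + h @ p['Uz'])
    r = sigmoid(x @ p['Wr'] + h @ p['Ur'])
    h_tilde = np.tanh(x @ p['Wh'] + (r * h) @ p['Uh'])
    return (1 - z) * h + z * h_tilde

def init_params(d, hidden, rng):
    """Random weights for a GRU with input size d and `hidden` units."""
    p = {k: rng.normal(0, 0.1, (d, hidden)) for k in ('Wz', 'Wr', 'Wh')}
    p.update({k: rng.normal(0, 0.1, (hidden, hidden)) for k in ('Uz', 'Ur', 'Uh')})
    p['Wo'] = rng.normal(0, 0.1, (hidden, 2))   # 2 linear output units
    return p

def run_gru(seq, p, hidden):
    """Run the GRU over a (T, D) sequence of per-frame CNN features and map
    each hidden state to per-frame (valence, arousal) predictions."""
    h = np.zeros(hidden)
    out = []
    for x in seq:
        h = gru_step(x, h, p)
        out.append(h @ p['Wo'])
    return np.array(out)
```

The hidden state carries information across frames, which is what lets the model exploit the temporal dynamics of affect rather than scoring each frame in isolation.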

Experimental Results
In the following we present the affect recognition results obtained when applying the above derived CNN-only and CNN plus RNN architectures to the Aff-Wild database. At first, we trained the VGG-Face network using two different annotations. One, which is provided in the Aff-Wild database, is the average of the selected (as described in subsection 3.4) annotations. The second is that of a single annotator (the one with the highest correlation to the landmarks). It should be mentioned that the latter is generally less smooth than the former, average, one; hence, it is more difficult to model. Then, we tested the two trained networks in two scenarios, as described in subsection 4.2.1 case B, with and without the 68 2-D landmark inputs.
The results are summarized in Table 7. As was expected, better results were obtained when the mean of annotations was used. Moreover, Table 7 shows that there is a notable improvement in the performance, when we also used the 68 2-D landmark positions as input data.
Next, we examined the use of various numbers of hidden layers and hidden units per layer when training and testing the VGG-Face-GRU network. Some characteristic selections and their corresponding performances are shown in Table 8. It can be seen that the best results have been obtained when the RNN part of the network consisted of 2 layers, each of 128 hidden units. In Figures 10(a) and 10(b), we qualitatively illustrate some of the obtained results by comparing a segment of the obtained valence/arousal predictions to the ground truth values, in 10000 consecutive frames of test data. The results shown in Table 9 and the above figures verify the excellent performance of the AffWildNet. They also show that it greatly outperformed all methods submitted to the Aff-Wild Challenge.

Feature Learning from Aff-Wild
When it comes to dimensional emotion recognition, there exists great variability between different databases, especially those containing emotions in-the-wild. In particular, the annotators and the ranges of the annotations are different, and the labels can be either discrete or continuous. To tackle the problems caused by this variability, we take advantage of the fact that Aff-Wild is a powerful database that can be exploited for learning features, which may then be used as priors for dimensional emotion recognition. In the following, we show that it can be used as a prior for the RECOLA and AFEW-VA databases, which are annotated for valence and arousal, just like Aff-Wild. In addition, we use it as a prior for categorical emotion recognition on the EmotiW dataset, which is annotated in terms of the seven basic emotions. Experiments have been conducted on these databases, yielding state-of-the-art results and thus verifying the strength of Aff-Wild for affect recognition.

Experimental Results for the Aff-Wild and RECOLA database
In this subsection, we demonstrate the superiority of our database when it is used for pre-training a DNN. In particular, we fine-tune the AffWildNet on RECOLA and, for comparison purposes, we also train on RECOLA an architecture comprised of a ResNet-50 with a 2-layer GRU stacked on top (let us call it the ResNet-GRU network). Table 10 shows the results only for the CCC score, as our minimization loss depended on this metric. It is clear that the performance, on both arousal and valence, of the model fine-tuned on the Aff-Wild database is much higher than that of the ResNet-GRU model.
To further demonstrate the benefits of our model when predicting valence and arousal, we show a histogram in the 2-D valence-arousal space of the annotations (Figure 12(a)) and of the predictions of the fine-tuned AffWildNet (Figure 12(b)) for the whole test set of RECOLA. Finally, we also illustrate in Figures 13(a) and 13(b) the network prediction and ground truth for one test video of RECOLA, for the valence and arousal dimensions, respectively.

Experimental Results for the AFEW-VA database
In this subsection, we focus on the recognition of emotions in the AFEW-VA database, whose annotation range differs from that of Aff-Wild. To tackle this problem, we scaled the AFEW-VA labels to [−1, +1]. Moreover, differences were observed due to the fact that the labels of AFEW-VA are discrete, while the labels of Aff-Wild are continuous. Figure 14 shows the discrete valence and arousal values of the annotations in the AFEW-VA database, whereas Figure 15 shows the corresponding histogram in the 2-D valence-arousal space. We then fine-tuned the AffWildNet on the AFEW-VA database and tested the performance of the generated network. Similarly to [30], we used a 5-fold person-independent cross-validation strategy. Table 11 shows a comparison of the performance of the fine-tuned AffWildNet with the best results reported in [30]. Those results are in terms of the Pearson CC. It can be easily seen that the fine-tuned AffWildNet greatly outperformed the best method reported in [30].
For comparison purposes, we also trained a CNN network on the AFEW-VA database. This network's architecture was based on the convolutional and pooling layers of VGG-Face, followed by 2 fully connected layers with 4096 and 2048 hidden units, respectively. As shown in Table 12, the fine-tuned AffWildNet, in terms of CCC, greatly outperformed this network as well. All this verifies that our network can be used as a pre-trained one to yield excellent results across different dimensional databases.

Experimental Results for the EmotiW dataset
To further show the strength of the AffWildNet, we used it, trained as it was for the dimensional emotion recognition task, on a very different problem, that of categorical in-the-wild emotion recognition, focusing on the EmotiW 2017 Grand Challenge. To tackle categorical emotion recognition, we modified the AffWildNet's output layer to include 7 neurons (one for each basic emotion category) and performed fine-tuning on the AFEW 5.0 dataset.
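Swapping the 2-unit linear valence-arousal head for a 7-way softmax can be sketched as follows; the feature dimensionality and weight names are illustrative, while the seven AFEW emotion categories are listed for concreteness:

```python
import numpy as np

EMOTIONS = ['Angry', 'Disgust', 'Fear', 'Happy', 'Neutral', 'Sad', 'Surprise']

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def categorical_head(features, w, b):
    """Replacement output layer: 7 softmax logits (one per basic emotion
    category) instead of the 2 linear valence/arousal units."""
    probs = softmax(features @ w + b)
    return probs, [EMOTIONS[i] for i in probs.argmax(axis=1)]
```

Only this head is new; the convolutional and recurrent layers keep the affect-related features learned on Aff-Wild, which is what fine-tuning then adapts to the categorical labels.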
In the presented experiments, we compare the fine-tuned AffWildNet's performance with that of other state-of-the-art CNN and CNN-RNN networks, whose CNN parts are based on the ResNet-50, VGG-16 and VGG-Face architectures, trained on the same AFEW 5.0 dataset. The accuracies of all networks on the validation set of the EmotiW 2017 Grand Challenge are shown in Table 13; a higher accuracy value indicates better performance. We can easily see that the AffWildNet outperforms all those other networks in terms of total accuracy. We should note that: (i) the AffWildNet was trained to classify only video frames (and not audio), and video classification was then performed based on frame aggregation; (ii) only the cropped faces provided by the challenge were used (and not our own detection and/or normalization procedure); (iii) no data augmentation, post-processing of the results or ensemble methodology was employed.
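The exact frame-aggregation rule is not detailed here; two common schemes, averaging per-frame class probabilities and majority voting over per-frame labels, can be sketched as:

```python
import numpy as np

def classify_video_mean(frame_probs):
    """Average the per-frame class probabilities over a video, then argmax."""
    return int(np.asarray(frame_probs).mean(axis=0).argmax())

def classify_video_vote(frame_labels):
    """Majority vote over the per-frame argmax predictions."""
    vals, counts = np.unique(frame_labels, return_counts=True)
    return int(vals[counts.argmax()])
```

Probability averaging keeps the per-frame confidence information, whereas majority voting is more robust to a few frames with extreme (but wrong) confidence.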
It should also be mentioned that the fine-tuned AffWildNet's performance, in terms of total accuracy, is: (i) much higher than the baseline total accuracy of 0.3881 reported in [16]; (ii) better than all vanilla architectures' performances reported by the three winning methods of the audio-video emotion recognition EmotiW 2017 Grand Challenge. The above are shown in Table 14. Those results verify that the AffWildNet can be appropriately fine-tuned and successfully used for dimensional, as well as for categorical, emotion recognition.

Conclusions and Future Work
Deep learning and deep neural networks have been successfully used in recent years for facial expression and emotion recognition based on still image and video frame analysis. Recent research focuses on in-the-wild facial analysis and refers either to categorical emotion recognition, targeting recognition of the seven basic emotion categories, or to dimensional emotion recognition, analyzing the valence-arousal (V-A) representation space.
In this paper, we introduce Aff-Wild, a new, large in-the-wild database that consists of 298 videos of 200 subjects, with a total length of more than 30 hours. We also present the Aff-Wild Challenge that was organized on Aff-Wild. We report the results of the challenge, and the pitfalls and challenges in terms of predicting valence and arousal in-the-wild. Furthermore, we design a deep convolutional and recurrent neural architecture and perform extensive experimentation with the Aff-Wild database. We show that the generated AffWildNet provides the best performance for valence and arousal estimation on the Aff-Wild dataset, both in terms of the Concordance Correlation Coefficient and the Mean Squared Error criteria, when compared with other deep learning networks trained on the same database.
We then demonstrate that the AffWildNet and the Aff-Wild database constitute tools that can be used for facial expression and emotion recognition on other datasets. Using appropriate fine-tuning and retraining methodologies, we show that the best results can be obtained by applying the AffWildNet to other dimensional databases, such as RECOLA and AFEW-VA, and by comparing the obtained performances with those of other state-of-the-art pre-trained and fine-tuned networks.
Furthermore, we observe that fine-tuning the AffWildNet can produce state-of-the-art performance, not only for dimensional, but also for categorical emotion recognition. We use this approach to tackle the facial expression and emotion recognition parts of the EmotiW 2017 Grand Challenge, referring to recognition of the seven basic emotion categories, and find that we produce results comparable to, or better than, those of the winners of this contest.
It should be stressed that, to the best of our knowledge, this is the first time the same deep architecture has been used for both dimensional and categorical emotion analysis. To achieve this, the AffWildNet has been effectively trained on the largest existing in-the-wild database for continuous valence-arousal recognition (a regression problem) and then used for tackling the discrete seven basic emotion recognition (classification) problem.
The proposed procedure for fine-tuning the AffWildNet can be applied to further extend its use in the analysis of other new visual emotion recognition datasets. This includes our current work on extending Aff-Wild with new in-the-wild audiovisual information, as well as using it as a means for unifying different approaches to facial expression and emotion recognition. These approaches comprise dimensional emotion representations, basic and compound emotion categories, facial action unit representations, as well as specific emotion categories met in different contexts, such as negative emotions, emotions in games, in social groups and in other human-machine (or robot) interactions.

A.1 Baseline: CNN-M
The exact structure of the network is shown in Table 15. In total, it consists of 5 convolutional, batch normalization and pooling layers and 2 fully connected (FC) ones. For each convolutional layer the parameters are the filter and the stride, in the form (filter height, filter width, input channels, output channels/feature maps) and (1, stride height, stride width, 1), respectively; for the max pooling layers the parameters are the ksize and stride, in the form (pooling height, pooling width, input channels, output channels) and (1, stride height, stride width, 1), respectively. We follow the TensorFlow platform's notation for the values of all these parameters. Note that the activation function in the convolutional and batch normalization layers is the ReLU; this is also the case in the first FC layer. The activation function of the second FC layer, which is the output layer, is linear.

A.2 ResNet-50
Residual learning is adopted in these models by stacking multiple blocks of the form o_k = B(x_k, {W_k}) + h(x_k), where x_k, W_k and o_k indicate the input, the weights and the output of layer k, respectively, B indicates the residual function that is learnt, and h is the identity mapping between the residual function and the input. When the dimensions of x_k and B(x_k, {W_k}) differ, h is a projection of x_k that matches those dimensions (done by 1 × 1 convolutions), as in [24]. The first layer of the ResNet-50 model is comprised of a 7 × 7 convolutional layer with 64 feature maps, followed by a 3 × 3 max pooling layer. Next, there are 4 bottleneck blocks, with a shortcut connection added after each block. Each of these blocks is comprised of 3 convolutional layers of sizes 1 × 1, 3 × 3 and 1 × 1, with different numbers of feature maps.
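The residual-addition structure o_k = B(x_k, {W_k}) + h(x_k) can be sketched with 1 × 1 (channel-mixing) convolutions; in this sketch the 3 × 3 spatial convolution is also replaced by a 1 × 1 one purely for brevity, since the point illustrated is the shortcut addition, not the spatial filtering:

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution is per-pixel channel mixing: (H, W, Cin) @ (Cin, Cout)."""
    return x @ w

def bottleneck_block(x, w1, w2, w3, w_proj=None):
    """o = B(x, {W}) + h(x): 1x1 reduce, transform, 1x1 expand, then add the
    shortcut; h projects x (1x1 conv) only when the channel counts differ."""
    b = np.maximum(0, conv1x1(x, w1))   # 1x1 channel reduction + ReLU
    b = np.maximum(0, conv1x1(b, w2))   # stand-in for the 3x3 conv
    b = conv1x1(b, w3)                  # 1x1 channel expansion
    h = x if w_proj is None else conv1x1(x, w_proj)
    return np.maximum(0, b + h)         # shortcut addition, then ReLU
```

The shortcut lets each block learn only the residual B rather than the full mapping, which is what makes networks as deep as ResNet-50 trainable.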
The architecture of the network is depicted in Figure 16. Each convolutional layer is in the format: filter height × filter width, number of input feature maps, number of output feature maps.
A.3 VGG-Face/VGG-16

Table 16 shows the configuration of the CNN architecture based on VGG-Face or VGG-16. In total, it is composed of thirteen convolutional and pooling layers and three fully connected ones. For all these layers the form of the parameters is the same as described above for the baseline architecture. We follow the TensorFlow platform's notation for the values of all these parameters. The output number of units is also shown in the Table. A linear activation function was used in the last FC layer, providing the final estimates. All units in the remaining FC layers were equipped with the ReLU. Dropout has been added after the first FC layer in order to avoid over-fitting. The architecture of the network is depicted in Figure 17.

Fig. 16: The CNN-only architecture for valence and arousal estimation, based on the ResNet-50 structure and including two fully connected layers (V and A stand for valence and arousal, respectively). Each convolutional layer is in the format: filter height × filter width, number of input feature maps, number of output feature maps.