Four right-handed volunteers, two men and two women, between ages and 33 years of age, were photographed while posing Ekman's six universal emotions (Ekman, 1993): fear, anger, happiness, surprise, sadness, and disgust. In addition, a picture of a neutral expression was taken, for which subjects were asked to relax and look straight ahead (Moreno, Borod, Welkowitz, & Alpert, 1990).
All the facial emotional expressions were captured using a full HD digital camera (Sony Alpha 7 II ILCE-7M2K®) positioned on a tripod and centrally placed in front of the subject at a distance of about 2 m. Before being photographed, subjects were informed about the aim of the study and the procedure to be followed. They had to avoid make-up (except mascara) and to take off glasses, piercings, and earrings that could be used by dogs as a cue to discriminate the different expressions. Furthermore, an experimenter showed them a picture of the emotional facial expressions used by Schmidt and Cohn (2001) as a general reference for the expressive characteristics required. Subjects were then asked, upon oral command, to pose the different emotional facial expressions with the greatest possible intensity. The order of the oral commands was randomized.
All the photographs were edited using Adobe Photoshop to homogenize the size of the stimuli and to add a uniform black background. Each face was cut along the vertical midline bisecting the right and the left hemiface, following the procedure described in Moreno et al. (1990). A composite photograph (mirrored chimeric picture) was then created for each of the two pictures, consisting of the original hemiface and its mirror-reversed copy (a right-right (R-R) or left-left (L-L) hemifaces chimeric picture). As a result, two different pictures were obtained for each emotion, representing respectively the left and right hemiface expression of the same emotion (see Fig. 1). A Sencore ColorPro 5 colorimeter sensor and Sencore ColorPro 6000 software were used to calibrate the colours of the monitor to CIE Standard Illuminant D65 and to equalize the pictures' brightness.
All 56 visual stimuli (two pictures × seven emotions × four subjects) were then presented to four women and four men, between 23 and 62 years of age, in order to select the most representative ones. The pictures were shown as a PowerPoint slideshow in full-screen mode on a monitor (Asus VG248QE®), in a random order between subjects. Each volunteer sat in front of the screen and had to rate on a 6-point scale (ranging between 0 and 5) the intensity of neutral, happiness, disgust, fear, anger, surprise, and sadness perceived for each facial expression shown. According to the questionnaire results, the pictures of one man and one woman were selected for the final test (see Fig. 1).
Twenty-six domestic dogs of various breeds were recruited for this research. To be involved in the study, subjects had to satisfy several criteria: they had to live in a household, be food motivated, and be free of chronic diseases. In addition, a Veterinary Behaviourist of the Department of Veterinary Medicine had to certify their health and the absence of any ocular and behavioural pathologies. Subjects had to fast for at least 8 hours before the testing session. We excluded five subjects: three dogs did not respond to any visual stimuli (i.e. did not stop feeding), and two dogs were influenced by the owner during the experiment. Hence, the final sample consisted of 21 subjects, 12 males (three neutered) and nine females (six neutered), whose ages ranged from 1 to 13 years (M = 3.90, SD = 2.83).
The experiment was carried out in an isolated, dark room of the Department of Veterinary Medicine, University of Bari. A lamp was used to illuminate the room artificially and uniformly, so that light reflections on the screen would not interfere with the dogs' perception of the visual stimuli. Two monitors (Asus VG248QE®, 24-in. FHD, 1920 × 1080; maximum brightness: 350 cd/m2) connected to a computer by an HDMI splitter were used to display the visual stimuli simultaneously. They were positioned on the two sides of a bowl containing the dogs' favourite food, at a distance of 1.90 m and aligned with it (see Fig. 2).
In addition, two plastic panels (10 cm high, 50 cm deep) were located on the two sides of the bowl at a distance of 30 cm, to ensure the dogs' central position during the test. Furthermore, two cameras, one recording in standard mode and the other in night mode, were used to record the dog's behaviour during trials. They were positioned on tripods in front of the subject, at distances of about 3 m and 3.50 m and at heights of 1.30 m and 2 m, respectively (see Fig. 2).
Participants were randomly divided into two groups according to the gender of the presented human faces, so that each subject was presented with only female or only male pictures. The test consisted of two weekly trials in which a maximum of two different emotional face dyads were shown to each dog, until the full set of stimuli was completed (i.e. each subject was presented with all seven emotional faces).
The right-right (R-R) or left-left (L-L) hemifaces chimeric pictures of the same emotion were randomly assigned to each trial (and counterbalanced considering the whole sample), as well as the order of the emotional faces displayed.
Once in the testing room, the owner led the dog to the bowl on a loose leash, helped it to take a central position in the testing apparatus, and waited until it started to feed. The owner then let the dog off the leash and positioned himself 2.5 m behind it. During the test, the owner had to maintain this position, looking straight at the wall in front of him and avoiding any interaction with the dog. Ten seconds after the owner took position, the first emotional face was displayed. Visual stimuli appeared simultaneously on the two screens, where they remained for 4 seconds. The chimeric pictures of the different emotions were presented in the middle of the screen. The interstimulus interval was at least 7 seconds, but if a subject did not resume feeding within this time, the following stimulus presentation was postponed. The maximum time allowed to resume feeding was 5 minutes. Visual stimuli were presented as a PowerPoint slideshow in which the first, the last, and the in-between slides were uniformly black. Each of the seven emotional face dyads was displayed only once per dog, since a high level of habituation to the stimuli was registered during the pilot test.
Two experimenters controlled the stimuli presentation from an adjacent room with the same system described in Siniscalchi et al. (2018).
Lateral asymmetries in the head-turning response were considered, since they are an indirect index of the predominant involvement of the hemisphere contralateral to the side of the turn in processing the stimulus (Siniscalchi et al., 2010). Three different responses were evaluated: turn right, turn left, and no response (when a subject did not turn its head within 6 seconds of the picture's appearance). The asymmetrical response was computed by attributing a score of 1.0 to left head-turning responses, −1.0 to right head-turning responses, and zero in the event of no head turn.
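The scoring rule above can be sketched as follows; this is a minimal illustration, and the function name, response labels, and example data are invented, not from the study.

```python
# Illustrative sketch of the laterality scoring rule:
# +1.0 for a left head turn, -1.0 for a right head turn, 0 for no response.
# Names and example responses are hypothetical.

def laterality_score(response: str) -> float:
    """Map a single head-turning response to its asymmetry score."""
    scores = {"left": 1.0, "right": -1.0, "no_response": 0.0}
    return scores[response]

# Example: one dog's responses across several stimulus presentations
responses = ["left", "left", "right", "no_response", "left", "right", "left"]
total = sum(laterality_score(r) for r in responses)
# A positive total indicates a prevalence of left head turns
# (i.e. predominant right-hemisphere involvement).
```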
Dogs’ behaviours were video recorded continuously throughout the experiment. A total of 26 behaviours were considered, belonging to the stress behavioural category (Handelman, 2012): ears held in tension, slightly spatulate tongue, tongue way out, braced legs, tail down-tucked, panting, salivating, look away of avoidance, flattened ears, head lowered, paw lifted, lowering of the body posture, vocalization, whining, shaking of the body, running away, hiding, freezing, lips licking, yawning, splitting, blinking, seeking attention from the owner, sniffing on the ground, turn away, and height seeking posture.
Two trained observers analysed the video footage and allocated a score of 1 for each behaviour shown. Interobserver reliability was assessed by means of independent parallel coding of videotaped sessions and calculated as percentage agreement; percentage agreement was always above 91%. Furthermore, the latency to turn the head toward the stimuli (i.e. reactivity) and the latency to resume feeding from the bowl after the pictures' appearance were computed.
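Percentage agreement as used here can be computed as the share of items on which the two observers gave the same score. A minimal sketch, assuming the two observers' binary codings are aligned item by item (all names and data are illustrative):

```python
# Hypothetical sketch of interobserver percentage agreement:
# the fraction of aligned coding items on which both observers agree.

def percentage_agreement(coder_a, coder_b):
    """Percentage of items scored identically by both observers."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Invented binary codings (1 = behaviour shown, 0 = not shown)
obs_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
obs_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
agreement = percentage_agreement(obs_a, obs_b)  # 90.0
```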
The heart rate response to the stimuli presentation was evaluated following the procedures and the analysis previously described in Siniscalchi et al. (2016) and Siniscalchi et al. (2018). The PC-Vetgard+tm Multiparameter wireless system, to which dogs were previously accustomed, was used to record cardiac activity continuously during the test. The heart rate response was analysed from the pictures' appearance for at least the following 10 seconds or until the dog resumed feeding (maximum time allowed was 5 minutes). For the analysis, a heart rate curve was obtained during a pre-test in order to calculate the basal heart rate average (HR baseline). The highest (HV) and lowest (LV) heart rate values registered during the test were scored. Moreover, the area delimited by the HR curve and the baseline was computed for each dog and each visual stimulus using Microsoft Excel®. The area under the curve (above baseline and under the curve; AUC) and the area above the curve (under baseline and above the curve; AAC) were calculated as numbers of pixels using Adobe Photoshop. HR changes for each dog during presentations of different emotional faces were then analysed by comparing the different area values with the corresponding baseline.
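The study obtained the AUC and AAC values by counting pixels in Photoshop, but the same two quantities can be computed numerically from the sampled heart rate curve. A sketch under that assumption, with invented sample values:

```python
# Numeric sketch of the AUC/AAC step: area between the HR curve and the
# baseline, split into the part above baseline (AUC, above baseline and
# under the curve) and below it (AAC). All values are illustrative.

def hr_areas(hr_samples, baseline, dt=1.0):
    """Return (AUC, AAC) by rectangle summation over equally spaced samples."""
    auc = sum(max(hr - baseline, 0.0) * dt for hr in hr_samples)
    aac = sum(max(baseline - hr, 0.0) * dt for hr in hr_samples)
    return auc, aac

baseline = 90.0                                # pre-test HR baseline (bpm)
hr = [92, 95, 99, 96, 91, 88, 85, 87, 90, 93]  # samples after stimulus onset
auc, aac = hr_areas(hr, baseline)
```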
Given that data for percentage of responses (%Res) were not normally distributed, the analysis was conducted by means of nonparametric tests (Friedman’s ANOVA).
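Friedman's ANOVA for repeated measures is available in SciPy as `scipy.stats.friedmanchisquare`; the sketch below applies it to invented %Res values (one list per emotion condition, one entry per dog) purely as an illustration of the test named above.

```python
# Hedged sketch: Friedman's ANOVA on %Res across emotion conditions.
# The data are invented; each list holds one %Res value per dog.
from scipy.stats import friedmanchisquare

happiness = [80, 75, 90, 85, 70, 95, 88]
anger     = [60, 55, 70, 65, 50, 75, 68]
fear      = [65, 70, 60, 75, 55, 80, 72]

stat, p = friedmanchisquare(happiness, anger, fear)
# A small p-value would indicate that %Res differs between conditions.
```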
A binomial GLMM analysis was performed to assess the influence of emotion category, human face gender, and sex on the test variable (head-orienting response), with subject as a random variable. To detect differences between the emotion categories, Fisher's least significant difference (LSD) pairwise comparisons were performed. In addition, asymmetries at the group level (i.e. per emotion category) were assessed via a one-sample Wilcoxon signed-rank test, to detect significant deviations from zero.
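The group-level asymmetry check can be sketched with SciPy's `wilcoxon`, which by default discards zero scores (the no-response trials) before testing for a deviation from zero. The laterality scores below are invented for illustration.

```python
# Illustrative one-sample Wilcoxon signed-rank test on laterality scores
# for one emotion category; a significant result indicates a group-level
# deviation from zero (i.e. a population-level side bias). Data invented.
from scipy.stats import wilcoxon

scores = [1.0, 1.0, -1.0, 1.0, 1.0, 0.0, 1.0, -1.0, 1.0, 1.0]
stat, p = wilcoxon(scores)  # zeros are dropped by the default zero_method
```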
Latency to resume feeding, reactivity, behavioural score and cardiac activity
GLMM analyses were performed to assess the influence of emotion category, human face gender, and sex on the test variables (latency to resume feeding, reactivity, AUC, AAC, and stress behaviours), with subject as a random variable. To detect differences between the emotion categories, Fisher's least significant difference (LSD) pairwise comparisons were performed.
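A mixed model with subject as a random intercept captures the structure described here. As a sketch only, the example below fits a linear mixed model with statsmodels' `mixedlm` to simulated latency data; this is a stand-in illustration of the modelling approach, not the study's actual analysis, and all variable names and data are invented.

```python
# Hypothetical sketch: random-intercept mixed model of latency on emotion
# category, grouped by subject (statsmodels MixedLM). Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
emotions = ["neutral", "happiness", "fear", "anger"]
rows = [{"subject": d, "emotion": e, "latency": rng.gamma(2.0, 5.0)}
        for d in range(21) for e in emotions]
df = pd.DataFrame(rows)

model = smf.mixedlm("latency ~ emotion", df, groups=df["subject"])
result = model.fit()
# result.params holds the fixed-effect estimates per emotion category.
```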
The experiments were conducted according to the protocols approved by the Italian Minister for Scientific Research in accordance with EC regulations and were approved by the Department of Veterinary Medicine (University of Bari) Ethics Committee EC (Approval Number: 5/15); in addition, before the experiment began, informed consent was obtained from all the participants included in the study.