Ethical approval was obtained from the National Animal Experimentation Ethics Committee for both cats and dogs (PE/EA/1550-5/2019). All methods were carried out in accordance with relevant guidelines and regulations, and the experiment was performed in accordance with EU Directive 2010/63/EU. Owners provided written informed consent for their cats and dogs to participate in the study.
All tested cats had been habituated to the test room on an earlier, separate occasion; the criterion for being ready for testing was that the cat accepted food from an unfamiliar female experimenter or played with her. Habituation time varied between subjects depending on the cat’s behaviour: it could last less than 10 min for a subject keen to explore the environment and to interact with (play with and/or accept food from) both the owner and the experimenter, or continue until the cat interacted with both persons (up to 29 min). We tested 42 cats, of which 12 had to be excluded: 1 cat did not leave the box; the owner of 1 cat was not able to hold the cat in her lap; 5 cats tried to escape from the owner’s hands throughout Trial 2; 2 cats looked at the stimuli for less than 1 s in either of the trials; 1 cat looked at the screen for less than 3 s while the owner moved the cat during the remaining time, so its gaze could not be coded; and the gaze of 2 cats could not be coded due to the quality of the video recording. Thus, 30 cats were included in the statistical analyses (1 Maine Coon, 1 Siamese and 28 mixed-breed cats; 12 females; mean ± SD age 2.4 ± 2.2 years) (see Online Resource 1).
We tested 61 dogs, all with an adult height at the withers of less than 40 cm. We excluded 24 dogs because they looked at the screen for less than 1 s in either of the trials; 6 dogs because the owner influenced the dog’s behaviour (e.g. forced the dog to look at the screen and held its head in one position, or pointed at the screen); 2 dogs due to technical issues; and 1 dog because its gaze could not be coded based on the video recording. Thus, 28 dogs were included in the statistical analyses (1 Bolognese, 1 Chinese crested, 1 dachshund, 2 French bulldogs, 1 Havanese, 1 Maltese, 2 miniature dachshunds, 1 miniature pinscher, 1 miniature poodle, 3 miniature schnauzers, 1 Papillon, 3 Yorkshire terriers, 2 mixed-breed dogs and 8 mongrels; 17 females; mean ± SD age 4.7 ± 3.5 years) (see Online Resource 1).
Subjects were tested at the Department of Ethology, Eötvös Loránd University, in a 5.2 m × 3 m testing room. Tests were recorded with two synchronised cameras. One camera was mounted on the ceiling behind the subjects, focusing on the video display. The other was a 25 frames-per-second zero-lux camera (Sony FDR-AX53) mounted on a compact tripod placed in front of the screen, equidistant from its sides and focusing on the subjects’ face. Infrared LEDs placed next to the camera were directed towards the subjects to improve the visibility of their eyes. The projector was mounted on the ceiling behind the subjects. Audio was played through two speakers centred behind the screen to avoid possible asymmetric cues. Videos were displayed on a screen (2 m × 2.1 m) placed 2.8 m in front of the subject (Fig. 1).
The owner entered the room (door A) along with Experimenter 1 (E1); cats were brought in inside their carrier box, and dogs were led in on leash. The owner placed the cat’s box on the ground in the corner next to the door and opened it; dogs were released from the leash. Cats and dogs could then explore the room. After the exploration, the owner sat on a wooden platform covered with artificial grass (H × W × L: 25 cm × 80 cm × 80 cm) and held the subject in his/her lap facing the screen (Fig. 1). E1 adjusted the focus of the zero-lux camera to capture the subject’s face and turned off the lights next to door A. In the case of cats and some dogs (N = 9), E1 then stayed there motionless during the display of the video while Experimenter 2 (E2) started the display from the adjacent room. For most dogs (N = 19), only one experimenter was present because E2 could no longer participate in the testing; in these cases, after turning off the lights, E1 left the room and started the video from the adjacent room.
We used the same set of videos as in Abdai et al. (2021), consisting of the following: (1) a 2.32 s-long audiovisual attention grabber directing subjects’ attention to the centre of the screen, (2) a 10 s stimulus (Trial 1), (3) a plain black screen for 3 s, (4) a 2.32 s audiovisual attention grabber, and (5) a 10 s stimulus (Trial 2) (see Online Resource 4). Each subject saw a unique video. Videos were generated by the ChasingDots program (developed by Bence Ferdinandy; Abdai et al. 2017b). Stimuli were dependent (henceforth ‘chasing’) and independent movement patterns of two white isosceles triangles presented side by side over a plain black background, separated by a white vertical line in the middle of the screen. In the independent patterns, one figure was a chaser and the other a chasee taken from two different chasing patterns; thus, the motion dynamics of the chasing and independent patterns were the same. The sides on which the chasing and independent patterns appeared were counterbalanced between trials and subjects.
All tests were recorded, and subjects’ behaviour was analysed with Solomon Coder 19.08.02 (developed by András Péter: http://solomoncoder.com). Data were analysed in R version 4.1.1 (R Development Core Team 2021) in RStudio version 1.4.1717. Backward model selection was carried out using the drop1 function; selection was based on the likelihood ratio test (LRT). LRTs of non-significant variables were reported before their exclusion from the models. For significant explanatory variables in the final models, we carried out pairwise comparisons (‘emmeans’ package) and report contrast estimates (β ± SD).
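To illustrate this model-selection workflow, a minimal R sketch is given below; the data frame d, the response look_dur and the predictor names are hypothetical placeholders, not the authors’ actual code.

library(lme4)     # mixed-effects models
library(emmeans)  # estimated marginal means and pairwise contrasts

# Hypothetical full model: looking duration as a function of pattern,
# trial and species (see the LMM described below), with subject ID as a
# random intercept.
m_full <- lmer(look_dur ~ pattern * trial * species + (1 | id), data = d)

# drop1 with a likelihood ratio test: each droppable term is removed in
# turn and the reduced model is compared to the fuller one.
drop1(m_full, test = "Chisq")

# After non-significant terms are removed, pairwise comparisons can be
# obtained for a significant predictor, e.g. pattern within species:
m_final <- lmer(look_dur ~ pattern * species + (1 | id), data = d)
emmeans(m_final, pairwise ~ pattern | species)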
All videos were coded frame by frame (25 frames per second); for each frame, gaze direction (independent, dependent, away) was determined. Looking duration at the patterns was coded based on eye movements. Inter-coder reliability, assessed on random subsamples (20% of the cats and 20% of the dogs), was acceptable (mean ± SD Cohen’s kappa: cats, 0.767 ± 0.167; dogs, 0.742 ± 0.053).
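Cohen’s kappa can be computed from two coders’ frame-by-frame codes, for example with the ‘irr’ package; the package choice and the short vectors below are illustrative assumptions, not the study’s data or stated tools.

library(irr)  # inter-rater reliability measures

# Two coders' gaze codes for the same frames
# (levels: dependent, independent, away).
coder1 <- c("away", "dependent", "dependent", "independent", "away")
coder2 <- c("away", "dependent", "independent", "independent", "away")

kappa2(data.frame(coder1, coder2))  # unweighted Cohen's kappa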
Looking duration of subjects was analysed using a linear mixed model (LMM; ‘lme4’ package). Residuals of the model were normally distributed after Tukey’s ladder of powers transformation (‘rcompanion’ package; lambda = 0.65) (Shapiro–Wilk test: W = 0.993, p = 0.351). We estimated the fixed effects of motion pattern (chasing vs independent), trial (Trial 1 vs 2) and species (cat vs dog), including their three-way interaction. We also tested whether the pattern subjects looked at first in a given trial, or the side on which the chasing pattern was displayed, affected their looking behaviour. Subject ID was included as a random intercept to control for within-subject comparisons across trials, and trial and pattern were included as random slopes to account for the non-independence of the data.
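A sketch of this model in R is shown below, assuming a hypothetical data frame d with one row per subject × trial × pattern; the object names are placeholders.

library(lme4)
library(rcompanion)

# Tukey's ladder of powers transformation of looking duration
# (the exponent selected in the study was lambda = 0.65).
d$look_t <- transformTukey(d$look_dur, plotit = FALSE)

# Three-way interaction of pattern, trial and species as fixed effects;
# random intercept for subject, with trial and pattern as random slopes.
m <- lmer(look_t ~ pattern * trial * species +
            (1 + trial + pattern | id), data = d)

# Normality check on the residuals, as reported in the text.
shapiro.test(residuals(m))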
For each trial, we created looking-time curves for cats and dogs to investigate the overall within-trial dynamics of gazing at the screen while the stimuli were displayed (Python 3.7.6 in Jupyter Notebook 6.0.3). A single point of a curve represents the proportion of time spent looking at the chasing or independent pattern within three consecutive frames in the given trial, separately for cats and dogs. Because subjects did not look at the stimuli at the onset of the trial, we only included data points after the proportion values reached 80% of the average proportion of looking time at the stimuli during that trial. Linear regression was applied to the data to capture overall trends and estimate slopes (β ± SE).
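The original curves were generated in Python; the R sketch below illustrates the same logic for one pattern in one trial, assuming a hypothetical frame-by-frame vector gaze with levels ‘dependent’, ‘independent’ and ‘away’ (25 frames per second).

# Proportion of each 3-frame bin spent looking at the chasing pattern.
bin  <- (seq_along(gaze) - 1) %/% 3
prop <- as.numeric(tapply(gaze == "dependent", bin, mean))

# Proportion of each bin spent looking at either stimulus, used for the
# 80%-of-trial-average onset criterion described above.
on_stim <- as.numeric(tapply(gaze != "away", bin, mean))
start   <- which(on_stim >= 0.8 * mean(on_stim))[1]
kept    <- prop[start:length(prop)]

# Linear regression over the retained points to estimate the overall
# slope (beta ± SE).
fit <- lm(kept ~ seq_along(kept))
summary(fit)$coefficients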
We counted the frequency of gaze shifts between patterns, irrespective of the delays in between. Based on AIC values (model comparison with ANOVA), a Poisson distribution fitted the data best (AIC = 476.86; the model with the lowest AIC value was kept, and a model was considered better whenever ΔAIC was ≥ 2). We fitted a generalised linear mixed model (GLMM; ‘lme4’ package) to analyse the data, estimating the fixed effects of trial (Trial 1 vs 2) and species (cat vs dog), including their two-way interaction, with subject ID as a random effect.
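A minimal sketch of this GLMM in R, assuming a hypothetical data frame shifts with one row per subject × trial and a count column n_shift; the distribution comparison is illustrated here with AIC() rather than the authors’ exact call.

library(lme4)

# Poisson GLMM: gaze-shift counts as a function of trial and species
# (two-way interaction), with subject ID as a random intercept.
m_pois <- glmer(n_shift ~ trial * species + (1 | id),
                family = poisson, data = shifts)

# A candidate alternative error structure, e.g. a Gaussian LMM fitted to
# the same counts, can be compared by AIC; the model with the lowest AIC
# is kept (considered better whenever delta-AIC >= 2).
m_gaus <- lmer(n_shift ~ trial * species + (1 | id), data = shifts)
AIC(m_pois, m_gaus)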