Abstract
This study aimed to explore healthy, term neonates’ behavioural and physiological responses to music using frame-by-frame analysis of their movements (Experiment 1; N = 32, 0–3 days old) and heart rate measurements (Experiment 2; N = 66, 0–6 days old). A ‘happy’ and a ‘sad’ piece of music were first validated for their emotional content by independent raters from a large pool of children’s songs and lullabies, and the effects of these two pieces and a no-music control condition were compared. The frame-by-frame behavioural analysis showed that babies had emotion-specific responses across the three conditions. Happy music decreased their arousal levels, shifting them from drowsiness to sleep, and resulted in longer latencies of other forms of self-regulatory behaviour, such as sucking. The decrease in arousal was accompanied by heart rate deceleration. In the sad music condition, relative ‘stillness’ was observed, and longer leg stretching latencies were measured. In both music conditions, longer latencies of fine motor finger and toe movements were found. Our findings suggest that the emotional response to music may emerge very early ontogenetically as part of a generic, possibly inborn, human musicality.
Introduction
Music is often referred to as the language of emotions (Panksepp & Bernatzky, 2002). People listen to music primarily because of its ability to arouse and regulate emotions (Juslin & Laukka, 2004; Koelsch et al., 2019; Reybrouck & Eerola, 2017; Sloboda & Juslin, 2001; Swaminathan & Schellenberg, 2015). However, culture plays a major role in how we play and understand music (Hannon & Trainor, 2007); therefore, it is unclear whether newborns and young children associate music with emotions they feel and whether they perceive distinct emotions in music (Nawrot, 2003). When asked to pair emotions detected from facial expressions with music, even children aged 3–4 years fail the task (Dalla Bella et al., 2001; Gregory et al., 1996; Nawrot, 2003). Schubert and McPherson (2015) suggested that experiencing emotions while listening to music is a learnt trait and that different emotions have different developmental trajectories.
It was reported that from infancy, humans are sensitive to the structure of music (Trehub, 2003; Trehub et al., 1993). Newborns accurately perceive the beat of music (Winkler et al., 2009) and respond to consonance and dissonance (Masataka, 2006; Perani et al., 2010; Trainor & Heinmiller, 1998), and very young infants are attracted to ‘motherese’, a very musical, melodic, slow, infant-directed speech with exaggerated pitch contours (Trainor et al., 1997). Caregivers sing to babies to put them to sleep, as well as during feeding, changing, and bathing them and playing and travelling with them (Trehub et al., 1997). Babies show focused attention and reduced body movements when their mothers sing (Trehub & Nakata, 2001). This behaviour is not due to learning, as hearing newborns of deaf parents also show a preference for maternal-style singing (Masataka, 1999). Moreover, foetuses pay attention to and remember the music to which they were previously exposed. The earliest foetal responses to sound occur at 16 weeks of gestation (Shahidullah & Hepper, 1994). By the end of the second trimester, the foetus can hear and detect changes in sound (Querleu et al., 1988). Foetuses exhibit distinct behavioural and physiological responses to music in general and to some extent respond to the tempo of music (Tristao et al., 2020). Gebuza et al. (2018a, b) found that when pregnant women at the 28th week of gestation were exposed to 10 min of their favourite songs, foetal heart rate increased 90 s into the song, but there were no changes in foetal body movement. The authors suggested that the delayed responses may have been mediated by the mothers’ emotional reactions, although the mothers themselves showed no changes in heart rate. Another study that examined pregnant women at 37–41 weeks of gestation in three groups —an auditory intervention for the mothers, a classical music auditory intervention for the foetuses, and a control condition—found no significant differences in foetal heart rate between groups (Khoshkholgh et al., 2016). Al-Qahtani (2005) found that foetuses exhibited increased heart rates and motor responses to both music and voice compared to a control condition, although there were no significant differences between responses to music and responses to voice. Overall, although evidence suggests that foetuses respond to musical stimuli, the nature of their responses is unclear.
Regarding newborns, several studies have explored the use of music as a form of intervention for prematurely born babies in neonatal intensive care units. A randomized clinical trial using lullaby, classical music, and no music (control) conditions with 25 preterm infants found that lullabies induced a reduction in both heart and respiratory rates, whereas classical music reduced only the heart rate (Amini et al., 2013). A systematic review (Anderson & Patel, 2018) of the 10 most rigorous studies reported in the literature found that half of them observed behavioural responses, such as reduced crying (Keith et al., 2009), arousal (Loewy et al., 2013), stress (Caine, 1991), and pain (Butt & Kisilevsky, 2000). Only half of them reported significant effects on heart rates or blood pressure (Arnon et al., 2014; Butt & Kisilevsky, 2000; Keith et al., 2009; Loewy et al., 2013; Lorch et al., 1994). Moreover, most of these studies also had dependent measures that did not demonstrate the effects of music, and overall, the effect sizes of the changes are unclear.
Few studies have examined the impact of music on healthy, term neonates with no complications. Winkler et al. (2009) reported that newborns are sensitive to music beat. Kotilahti et al. (2010) used near-infrared spectroscopy and found that speech and music responses in the brain are significantly correlated. Hernandez-Reif et al. (2006) found that lullabies with or without vocals elicited heart rate deceleration, but infants of depressed mothers showed delayed responses compared to those of non-depressed mothers. No studies, however, have examined whether neonates can differentially perceive and respond to emotions in music.
This study aimed to investigate the effects of background music on healthy, term neonates’ behavioural and physiological responses using frame-by-frame analysis of their movements and heart rate. A ‘happy’ and a ‘sad’ music piece, validated by independent raters for their emotional impact from a large pool of children’s songs and lullabies, were used. The effects of the emotions in the music were compared to a no-music control condition. It was expected that newborns would respond differently to the two music conditions and to the control condition. Although it is known that young children’s predominant response to music is movement (Metz, 1989), findings on foetuses and young infants are inconclusive regarding the direction and nature of the movement responses. Gebuza et al. (2018a, b) found no changes in foetal movements, whereas Al-Qahtani (2005) reported that foetuses exhibited motor responses to both music and voice compared to a control condition. Two other studies (Ilari, 2015; Zentner & Eerola, 2010) observed more rhythmic movements to music than to speech in older infants. Gordon (2013) suggested that infants’ responses to music go through developmental stages, first absorbing music through listening and then moving and babbling, first out of tune and eventually in tune with the music.
Previous studies have also reported conflicting results regarding heart rate. In foetuses, heart rate increased in the studies of Tristao et al. (2020), Gebuza et al. (2018a, b), and Al-Qahtani (2005), but did not change in the study of Khoshkholgh et al. (2016). Similarly, three studies on premature infants reported heart rate deceleration (Butt & Kisilevsky, 2000; Keith et al., 2009; Loewy et al., 2013), Arnon et al. (2006) found no changes, and Lorch et al. (1994) found varying changes. Tempo, a characteristic difference between ‘sad’ and ‘happy’ music, is linked to arousal in music across cultures (Schubert & McPherson, 2015). Faster, happy music may induce arousal and an increase in heart rate. If newborns are able to differentiate between ‘happy’ and ‘sad’ music, this might be reflected in heart rate deceleration to sad music and acceleration to happy music.
Validation of the Songs
Methods
Selection of the Music
Prior to the validation of the songs, two experimenters reviewed 126 children’s songs and lullabies from all around the world. From these, they selected 25 songs and lullabies for which both raters agreed on the emotional content as ‘happy’ or ‘sad’.
In validation studies of emotion recognition in adults, such as those using the most widely used set, the Pictures of Facial Affect (PoFA) (Ekman & Friesen, 1975), happiness was the most reliably recognized emotion, with 98.56% accuracy, and, unlike other emotions, it was not confused with other emotions; in particular, it had a 0% confusion rate with sadness. Given its high recognition rate, happiness is often used as a baseline in studies on emotion recognition from faces (Uljarevic & Hamilton, 2013). Happiness and sadness were therefore selected as the two basic emotional categories for the pieces.
Ratings of the Songs
Participants
Sixteen participants (11 females and 5 males, mean age = 25.25 years, SD = 10.76 years), all native English speakers, took part in the validation phase of the songs.
Materials and Procedure
Participants were asked to rate each song on an 11-point Likert scale ranging from −5 to +5, where ‘−5’ indicated ‘extremely sad’, ‘0’ indicated ‘emotionally neutral’, and ‘+5’ indicated ‘extremely happy’. Participants were asked to listen to each song from beginning to end and to regard it as a whole. They were also asked to disregard the linguistic content as much as possible.
For each song, the mean rating, range, and standard deviation were calculated (see Table 1).
Based on these ratings, the song with the lowest score, a French lullaby entitled ‘Fais Dodo’ (by Alexandra Montano and Ruth Cunningham), was chosen as the ‘sad’ song for the following experiments, and the song with the highest score, ‘Das singende Känguru’ (by Volker Rosin), was chosen as the ‘happy’ song. The two experimenters who preselected the songs were Hungarian and German native speakers fluent in English, while the participants in the validation process were native English speakers. Only six of the 25 songs were sung in English; the others were in various non-English languages. While language might have affected the rating process, the final selected pieces were in German and French for the happy and sad songs, respectively. Linguistic elements would ideally be avoided in a pure music-emotion perception study; however, lullabies and children’s songs are traditionally and authentically sung rather than instrumental, and young infants prefer singing over instrumentation (Ilari & Sundara, 2009). Given the nature of the musical pieces, the linguistic component seems unavoidable, even though language might have had an unknown effect on the ratings.
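For illustration only, the summary and selection step can be sketched in a few lines of Python; this is not the authors’ procedure, and the `ratings` dictionary below holds hypothetical placeholder scores rather than the study’s data.

```python
# Illustrative sketch (not the study's actual data or pipeline): summarise each
# song's 11-point ratings and pick the lowest- and highest-scoring songs as the
# 'sad' and 'happy' stimuli.
from statistics import mean, stdev

ratings = {  # hypothetical placeholder scores from 16 raters, range -5..+5
    "Fais Dodo": [-4, -3, -5, -4, -2, -4, -3, -5, -4, -3, -4, -5, -3, -4, -4, -3],
    "Das singende Kaenguru": [4, 5, 3, 5, 4, 4, 5, 3, 4, 5, 4, 4, 5, 3, 4, 5],
    # ... the remaining 23 songs
}

summary = {song: (mean(r), stdev(r), min(r), max(r)) for song, r in ratings.items()}
sad_song = min(summary, key=lambda s: summary[s][0])    # lowest mean rating
happy_song = max(summary, key=lambda s: summary[s][0])  # highest mean rating
print(sad_song, happy_song)
```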
Experiment 1: Newborns’ Behavioural Responses to Emotions in Music
Materials and Methods
Participants
The researchers approached the mothers of healthy, singleton newborn infants to take part in the study. Mothers who had no obstetric complications and whose newborns were healthy and required no neonatal intensive care unit observation were invited to participate. Of those, the newborns of mothers who signed an informed consent form were included in the study. All newborns tested for this experiment between October 2006 and January 2007 were included. The sample contained the video recordings of 56 babies. However, 15 participants were removed from the sample because they were soothed by pacifiers, a further 6 were excluded because they were too distressed to remain in the experiment, and 3 babies had incomplete data. The remaining 32 newborns were included in the final sample (male = 22, female = 10). The babies were 0–3 days old at the time of the experiment (mean age = 2.34, SD = 1.38) and were born between 36 and 40 gestational weeks (mean gestational age = 38.65, SD = 1.36). All newborns had Apgar scores of 8 or above when tested 1, 5, and 10 min after birth (Apgar, 1952). The newborns’ weight was 2510–4390 g (mean weight = 3473 g, SD = 474 g). Parental informed consent was received from all the mothers whose infants participated in this study. The Department of Obstetrics and Gynaecology of the Albert Szent-Gyorgyi Medical University and the ethics board at the University of Dundee approved this research.
Materials
As a result of the validation, ‘Das singende Känguru’ by Volker Rosin was used as the happy music, and the French lullaby ‘Fais Dodo’ by Alexandra Montano and Ruth Cunningham was used as the sad music.
A Sony CD player and speakers were used to administer the music. The length of both compositions was 2 min 54 s. The infants were video-recorded using a Panasonic-240 video camera placed on a tripod.
Procedure
The examination was carried out in a separate room that was an integral part of the Neonatal Ward. The room had constant illumination and a temperature of 28 °C. Newborns were examined 30–90 min after feeding. Infants were placed in a newborn seat on an examination table. The experimenter, who was the same throughout, stood in front of the infant, approximately 30 cm away. A CD player connected to speakers was placed next to the infant’s seat (see Fig. 1 for illustration). The experiment consisted of three conditions: a baseline, no-music condition, a happy music condition, and a sad music condition. Once the infants were settled, the baseline condition began. The baseline condition was an activity control used to gauge the activity level and state of the infants while no music was played. The order in which the music was administered to the infants was counterbalanced. Throughout the testing, the experimenter only intervened if the infant was showing signs of extreme distress.
Coding
The coding system contained 20 codes (see Table 2). The newborns’ state was coded (Brazelton, 1973; Prechtl, 1974). ‘Awake’ was coded when the infant’s eyes were open and the infant was aware of their surroundings. ‘Drowsy’ was coded when the infant’s eyes were barely open or switching between open and closed and showed little sign of awareness of the environment around them. ‘Sleep’ was coded when the infant’s eyes remained shut and the infant showed no signs of movement. ‘Aroused’ was coded when the infant’s motor movements increased and the movements appeared random. ‘Crying’ was coded when the infant became distressed and began to cry.
Infants’ self-regulatory behaviour (yawning, sucking) was also coded. ‘Yawning’ was coded when the infants opened their mouth wide and yawned, and ‘Sucking’ was coded when the infant began to suck on their fingers, clothes, or their tongue. The coding of arm movements contained two separate codes: ‘Arm stretch’ was coded when the infant extended their arm away from their body, and ‘Arm flexion’ was coded when the infant’s arm was retracted back towards their body. The coding of the infant’s fingers contained three separate codes. ‘Finger movement’ was coded when the fingers moved but were not stretched or clenched. ‘Fingers clenched’ was coded when the infant’s fingers were brought together tightly to form a fist. ‘Fingers stretched’ was coded when the infant’s fingers were extended fully. Furthermore, the infant’s ‘Smile’ was coded when the mouth of the infant moved to form a smile.
‘Leg flexion’ was coded when the infant's leg was retracted back towards their body. ‘Leg stretch’ was coded when the infant's leg was extended away from their body.
Toe movements were also coded. ‘Toes curled’ was coded when the infant’s toes were curled towards their foot. ‘Toes stretched’ was coded when the infant’s toes were fully extended. Lastly, the infant’s eyes were also coded. ‘Eyes open’ was coded when the infant’s eyes were fully open. ‘Eyes barely open’ was coded when the infant’s eyes were open slightly. ‘Eyes closed’ was coded when the infant’s eyes were closed.
The three conditions were coded frame-by-frame using the Noldus Observer-Pro 5.0 system (Noldus Information Technology, 2003). Each condition was coded for 60 s. All behaviours were coded and analysed for both their frequencies and durations, and latencies were calculated by the Observer XT 9.0 system.
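To make the dependent measures concrete, the sketch below shows how frequency (rate per minute), total duration, and latency of a single behaviour code could be derived from a list of timed events. It is only an illustration under assumed data structures, not the Observer software’s implementation; the event list is hypothetical.

```python
# Illustrative sketch (the study used the Noldus Observer software): derive
# frequency (rate/minute), total duration, and latency of one behaviour code
# from a hypothetical list of (onset_s, offset_s, code) events within a
# 60-second condition window.
def summarise(events, code, window_s=60.0):
    bouts = [(on, off) for on, off, c in events if c == code]
    frequency = len(bouts) * (60.0 / window_s)                # occurrences per minute
    duration = sum(off - on for on, off in bouts)             # seconds spent in the state
    latency = min((on for on, _ in bouts), default=window_s)  # onset of the first bout
    return frequency, duration, latency

events = [(2.4, 9.8, "Drowsy"), (12.0, 15.5, "Sucking"), (20.1, 58.0, "Sleep")]
print(summarise(events, "Sucking"))  # -> (1.0, 3.5, 12.0)
```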
Reliability Coding
Two independent coders, who had not been involved in the design of the study, the data collection, or its interpretation, coded the data. The coders were blind to the conditions they were coding. Ten percent of the data files were reliability-coded at the beginning of the coding to ensure the coders started with good inter-rater reliability. The overall inter-rater agreement was 77.17%, with a Cohen’s kappa of 0.75 for frequency. For duration, the overall inter-rater agreement was 87.10%, with a Cohen’s kappa of 0.86.
During the coding, a further 8% of the data was reliability-coded to ensure the coders remained reliable. The overall inter-rater agreement was 72.00%, with a Cohen’s kappa of 0.70 for frequency. For duration, the overall inter-rater agreement was 75.74%, with a Cohen’s kappa of 0.74.
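For readers unfamiliar with these reliability statistics, percent agreement and Cohen’s kappa can be computed as sketched below; the two short label lists are hypothetical, and the study itself obtained its values from the Observer software rather than from this code.

```python
# Minimal sketch of percent agreement and Cohen's kappa for two coders whose
# frame-by-frame labels are aligned in equal-length lists (hypothetical data).
from collections import Counter

def agreement_and_kappa(coder_a, coder_b):
    n = len(coder_a)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(counts_a[c] * counts_b[c]
                     for c in set(counts_a) | set(counts_b)) / n**2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    return 100 * p_observed, kappa

a = ["Sleep", "Sleep", "Drowsy", "Awake", "Sleep"]
b = ["Sleep", "Drowsy", "Drowsy", "Awake", "Sleep"]
print(agreement_and_kappa(a, b))  # -> (80.0, ~0.69)
```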
Statistical Analysis
Frequencies of the behaviours (rate/minute) calculated by the Observer XT 9.0 system (Noldus Information Technology, 2009) were used for statistical analysis. Repeated-measures analyses of variance (ANOVAs) were conducted using SPSS 22.0 for Windows statistical software (SPSS, Inc., Chicago, IL). A p < 0.05 was accepted as significant throughout, and p < 0.10 was noted as a tendency.
Results
Repeated-measures ANOVAs were used to test the effects of the three conditions (Happy music, Sad music, Baseline) on the frequencies, durations, and latencies of the behaviours of the newborns. When Mauchly’s tests indicated a violation of the assumption of sphericity, degrees of freedom were corrected using Greenhouse–Geisser sphericity estimates.
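The analyses were run in SPSS. As a rough open-source analogue, and assuming the pingouin library together with a long-format data frame containing hypothetical columns id, condition, and rate, a one-way repeated-measures ANOVA with a Greenhouse–Geisser correction and follow-up pairwise comparisons might be sketched as follows:

```python
# Rough open-source analogue of the SPSS analysis (a sketch, not the authors'
# code), assuming the pingouin library and a long-format table with
# hypothetical columns: id (infant), condition (Happy/Sad/Baseline), rate.
import pandas as pd
import pingouin as pg

df = pd.read_csv("drowsiness_rates.csv")  # hypothetical file name

aov = pg.rm_anova(dv="rate", within="condition", subject="id",
                  data=df, correction=True)           # Greenhouse-Geisser correction
posthoc = pg.pairwise_tests(dv="rate", within="condition", subject="id",
                            data=df, padjust="bonf")  # pairwise comparisons
print(aov)
print(posthoc)
```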
Frequency of Drowsiness
The data showed a significant main effect of condition, F(1.594, 49.418) = 4.230, p = 0.028, ηp2 = 0.120. Pairwise comparisons showed that the frequency of drowsiness was significantly lower during happy music compared to baseline, and lower at a tendency level when compared to sad music. The frequencies of drowsiness during baseline and sad music were not significantly different. See Table 3 and Fig. 2.
Duration of Sleep
The data showed a significant main effect of condition, F(2,62) = 3.189, p = 0.048, ηp2 = 0.093. Pairwise comparisons showed that the duration of sleep was significantly higher during happy music compared to sad music, and higher at a tendency level when compared to baseline. The durations of sleep during baseline and sad music were not significantly different. See Table 3 and Fig. 3.
Frequency of Eyes Open
The data showed a significant main effect of condition, F(2,62) = 3.773, p = 0.028, ηp2 = 0.108. Pairwise comparisons showed that the frequency of eyes open was significantly lower during both sad and happy music compared to baseline. The frequencies of eyes open during happy and sad music were not significantly different. See Table 3 and Fig. 4.
Frequency of Eyes Closed
The data showed a significant main effect of condition, F(1.565, 48.519) = 8.723, p = 0.001, ηp2 = 0.220. Pairwise comparisons showed that the frequency of eyes closed was significantly lower during both sad and happy music compared to baseline. The frequencies of eyes closed during happy and sad music were not significantly different. See Table 3 and Fig. 5.
Latency of Leg Stretching
The data showed a significant main effect of condition, F(2,62) = 3.579, p = 0.034, ηp2 = 0.104. Pairwise comparisons showed that the latency of leg stretching was longest during sad music, significantly longer than during baseline and longer at a tendency level compared to happy music. The latencies of leg stretches were comparable during happy music and baseline. See Table 3 and Fig. 6.
Latency of Toe Curling
The data showed a significant main effect of condition, F(2,62) = 3.678, p = 0.031, ηp2 = 0.106. Pairwise comparisons showed that the latencies of toe curling were significantly longer during both sad and happy music compared to baseline. The latencies of toe curling were comparable during sad and happy music. See Table 3 and Fig. 7.
Latency of Finger Movements
The data showed a significant main effect of condition, F(1.807, 56.024) = 5.754, p = 0.007, ηp2 = 0.157. Pairwise comparisons showed that the latencies of finger movements were significantly longer during both sad and happy music compared to baseline. The latencies of finger movements were comparable during sad and happy music. See Table 3 and Fig. 8.
Latency of Sucking
The data showed a significant main effect of condition, F(2,62) = 3.717, p = 0.036, ηp2 = 0.102. Pairwise comparisons showed that the latency of sucking was longer during happy music compared to baseline, and longer at a tendency level compared to sad music. The latencies of sucking were comparable during sad music and baseline. See Table 3 and Fig. 9.
No other coded behaviours showed a significant main effect of condition.
Experiment 2: Heart Rate Changes During Sad Music, Happy Music, and Baseline
Materials and Methods
Participants
The study took place between October 2006 and April 2007. The researchers approached the mothers of healthy, singleton newborn infants to take part in the study. Mothers who had no obstetric complications and whose newborns were healthy and required no neonatal intensive care unit observation were invited to participate. Of those, the newborns of mothers who signed an informed consent form were included in the study.
Seventy-seven newborn infants were recruited, and eleven were excluded due to incomplete data or crying. The data of 66 newborns (45 boys, 22 girls) were analysed for heart rate. The average age of the babies was 2.28 days (SD = 1.22, range 0–6 days, the youngest 2 h old), and they were born on average at 38.71 gestational weeks (SD = 1.31; range 36–41 weeks), with an average weight of 3439 g (SD = 447 g, range 2600–4390 g). Thirty newborns were born by vaginal delivery and 36 by caesarean section. All babies had Apgar scores of 8 or above 5 min after birth. The study was reviewed and approved by the Ethical Committees of the University of Dundee and the Albert Szent-Györgyi Medical University, Szeged. The data collection took place at the Neonatal Ward of the Obstetrics and Gynaecology Clinic at the University of Szeged, while the analysis of the data was done at the University of Dundee, Scotland.
Procedure
The examination was carried out in a separate room that was an integral part of the Neonatal Ward. The room had constant illumination and a temperature of 28 °C. Newborns were examined 30–90 min after feeding. Infants sat in a newborn seat placed on an examination table. The ECG of the neonates was recorded with chest electrodes using the Biosemi Active-2 System, with 1 ms accuracy. The R-R intervals were calculated from the continuous ECG data and converted to heart rate. The raw Biosemi ‘bdf’ data files were processed and converted to text files using MATLAB software. The resulting time-stamped heart rate data were analysed, synchronized with the music, and the average heart rate during each of the three conditions was calculated.
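The ECG processing was done in MATLAB; the sketch below is only an illustrative Python analogue of the described steps, with `ecg` and `fs` assumed to be a single-channel NumPy signal and its sampling rate, and the R-peak detection deliberately simplified.

```python
# Illustrative Python analogue of the described MATLAB pipeline (a sketch, not
# the authors' code): detect R peaks in a single ECG channel, convert R-R
# intervals to beats per minute, and average over one condition window.
import numpy as np
from scipy.signal import find_peaks

def mean_heart_rate(ecg, fs, t_start_s, t_end_s):
    segment = ecg[int(t_start_s * fs):int(t_end_s * fs)]
    # crude R-peak detection: prominent peaks at least 0.3 s apart
    peaks, _ = find_peaks(segment, distance=int(0.3 * fs),
                          prominence=segment.std())
    rr_s = np.diff(peaks) / fs          # R-R intervals in seconds
    return float(np.mean(60.0 / rr_s))  # mean heart rate in beats/min

# e.g. mean_heart_rate(ecg, fs=2048, t_start_s=0, t_end_s=174) for one
# 2-min-54-s music condition; `ecg` and `fs` are assumed inputs.
```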
Repeated-measures analyses of variance (ANOVAs) were conducted using SPSS 22.0 for Windows statistical software (SPSS, Inc., Chicago, IL), and a p < 0.05 was accepted as significant throughout.
Results
A repeated-measures ANOVA was used to test the effects of the three conditions (Happy music, Sad music, Baseline) on the heart rate of the newborns. The data showed a significant main effect of condition, F(1.826, 118.662) = 5.299, p = 0.008, ηp2 = 0.075. Pairwise comparisons showed that heart rate during happy music was significantly lower (mean = 125.209 beats/min, SE = 1.703) compared to sad music (mean = 127.749 beats/min, SE = 1.761, p = 0.009) and compared to baseline (mean = 127.428 beats/min, SE = 1.732, p = 0.003). Heart rate during sad music and baseline was not significantly different (p = 0.716). See Fig. 10.
Discussion
This study aimed to explore healthy, term neonates’ behavioural and physiological responses to background music using frame-by-frame analysis of their movements and heart rate measurements. The frame-by-frame analysis showed that arousal levels decreased, with drowsiness decreasing and sleep increasing, when the babies were exposed to happy music, but not when they were exposed to sad or no music. Eye movements decreased in both music conditions compared to the baseline, indicating that music had a calming effect on eye movement.
In terms of gross motor movements, relative ‘stillness’ was indicated by increased latency measures. Leg stretching, which is usually a sign of distress (Miller & Quinn-Hurst, 1994), was observed with longer latencies in the sad music condition than in the happy music and baseline conditions. The latencies of fine motor movements, toe curling and finger movements, were longer in both the sad and happy music conditions than in the baseline condition. Non-nutritive sucking, a self-regulatory behaviour, had a longer latency in the happy music condition than in the sad music and control conditions. Heart rate decreased in the happy music condition but not in the sad music or baseline conditions.
Overall, babies showed limited evidence of emotion-specific responses across the conditions. Happy music decreased their arousal levels, shifting drowsiness to sleep, and resulted in longer latencies of other forms of self-regulatory behaviour, such as sucking. Moreover, heart rate deceleration accompanied the decrease in arousal. In the sad music condition, relative ‘stillness’ and longer leg stretching latencies were observed, but no other measures were specific to it.
It is important to note that the music pieces were validated for their emotional content, and the emotions were conveyed by musical elements, such as tempo. Tempo is a characteristic difference between ‘sad’ and ‘happy’ music and is linked to music-induced arousal across cultures (Schubert & McPherson, 2015), and even foetuses, to some extent, respond to the tempo of music (Tristao et al., 2020). The current study aimed to explore how neonates responded to music that was perceived by adults as ‘happy’ or ‘sad’, not to the musical elements that are associated with emotions in music. It has been shown that it is mainly pitch, rather than tempo, that affects emotional ratings of music as happy, sad, or scary, and when rhythm affected the ratings, it always interacted with changes in pitch (Schellenberg et al., 2000). Further studies should examine the role of structural musical elements as they interact with emotional valence in the case of lullabies and children’s songs.
Overall, our results do not support the view that young children’s predominant response to music is movement (Metz, 1989). Rather, they confirm Gordon’s (2013) view that infants first absorb music through listening and then start to respond to it by moving. Indeed, both happy and sad music decreased eye movements and increased the latencies of fine motor finger and toe movements compared to the baseline, suggesting a subtle decrease in arousal in both music conditions, with the happy music having stronger effects, manifested in lower arousal levels, increased gross motor movement latency, and decreased heart rates. Whereas Keith et al. observed reduced crying (2009), and Loewy et al. (2013) observed decreased arousal in preterm infants when listening to music, most studies have found no changes in arousal levels (Anderson & Patel, 2018). Changes in arousal levels may have been missed by previous studies that did not code all stages of arousal (Brazelton, 1973; Prechtl, 1974). Anderson and Patel’s review (Anderson & Patel, 2018) showed that only half of the included studies found significant effects related to heart rate or blood pressure (Arnon et al., 2014; Butt & Kisilevsky, 2000; Keith et al., 2009; Loewy et al., 2013; Lorch et al., 1994). Some studies on foetuses reported increased heart rates (Al-Qahtani, 2005; Gebuza et al., 2018a, b; Tristao et al., 2020), while another found (Khoshkholgh et al., 2016) no changes. Similarly, some studies on premature infants reported decreased heart rates (Butt & Kisilevsky, 2000; Keith et al., 2009; Loewy et al., 2013), whereas others observed no changes (Arnon et al., 2006) or varying changes (Lorch et al., 1994). Our results are more in line with Hernandez-Reif et al. (2006), who found that lullabies with or without vocals elicited heart rate deceleration in newborns, although we observed heart rate changes only with the ‘happy’ music and not with the ‘sad’ lullaby.
Responses to music, however, are likely to have been shaped before birth. When 48 third-trimester foetuses were exposed to classical music, significant changes were observed in the number of uterine contractions and foetal movements (Gebuza et al., 2018a). It was suggested that listening to classical music could even prevent premature deliveries. Foetuses who were exposed to a set of musical pieces from the 32nd gestational week showed more foetal movements and heart rate decelerations to the music at the 38th gestational week (Wilkin, 1995). Moreover, the effect continued after birth: the babies preferred the music played to them in the womb, and at 6 weeks they continued to display more movement while listening to the pieces than a control group did (Wilkin, 1995).
In future studies on neonatal and infant responses to music, prenatal experiences and music at home should also be evaluated and taken into account. Given the auditory and cognitive abilities of the foetus (Marx & Nagy, 2015; Nagy et al., 2021) and previous data (Wilkin, 1995), it is likely that regular musical input in the womb shapes the neonate’s responses to music. Music therapy, as a soothing or, when needed, stimulating intervention, could be used more often in neonatal wards, including to counteract the distress of hospitalization (Gebuza et al., 2018a, b). Traditionally, lullabies are sung by the caretakers, usually the mothers. It is known that infants can identify emotions many months earlier when vocal and facial information are congruently matched, as opposed to mismatched or unimodal input (Walker-Andrews, 1997). Mothers often report that long-forgotten lullabies, heard from their own grandmothers, suddenly come back to them when singing to their own babies. Mothers singing lullabies and personally relevant songs is a musically and emotionally congruent message and is likely to affect the young infant’s psychophysiological state (Malloch & Trevarthen, 2009) long before babies cognitively categorize and recognize distinct emotional cues from faces and voices.
Conclusion
Our findings suggest that newborns reacted to music, mainly showing signs of decreased arousal and possibly increased attention. Moreover, they responded differently to the ‘happy’ and ‘sad’ music and no-music conditions, with the fast ‘happy’ music eliciting the strongest responses. Responses to the happy music are reminiscent of the stage that Gordon (2013) described as ‘absorbing’ the music, with decreased heart rates and reduced arousal levels.
It has been suggested that singing to babies serves an emotional communicative function long before language emerges (Rock et al., 1999; Trehub et al., 1997) and promotes attachment more than playing without music (Vlismas et al., 2013). Emotional communication via music possibly emerges very early ontogenetically as part of a generic, possibly inborn, human musicality (Savage et al., 2020). This study provides evidence that newborns react differently to happy and sad music. However, it is likely that maternal singing with different emotional content is more powerful in evoking emotion-specific behavioural and physiological responses. Therefore, future studies could investigate the emotional and physiological responses of neonates to maternal singing.
Data Availability
Data are available on request from the corresponding author.
References
Al-Qahtani, N. H. (2005). Foetal response to music and voice. Australian and New Zealand Journal of Obstetrics and Gynaecology, 45(5), 414–417. https://doi.org/10.1111/j.1479-828X.2005.00458.x
Amini, E., Rafiei, P., Zarei, K., Gohari, M., & Hamidi, M. (2013). Effect of lullaby and classical music on physiologic stability of hospitalized preterm infants: A randomized trial. Journal of Neonatal-Perinatal Medicine, 6(4), 295–301.
Anderson, D. E., & Patel, A. D. (2018). Infants born preterm, stress, and neurodevelopment in the neonatal intensive care unit: Might music have an impact? Developmental Medicine and Child Neurology, 60(3), 256–266.
Apgar, V. (1952). A proposal for a new method of evaluation of the newborn. Classic Papers in Critical Care, 32(449), 97.
Arnon, S., Diamant, C., Bauer, S., Regev, R., Sirota, G., & Litmanovitz, I. (2014). Maternal singing during kangaroo care led to autonomic stability in preterm infants and reduced maternal anxiety. Acta Paediatrica, 103(10), 1039–1044.
Arnon, S., Shapsa, A., Forman, L., Regev, R., Bauer, S., Litmanovitz, I., & Dolfin, T. (2006). Live music is beneficial to preterm infants in the neonatal intensive care unit environment. Birth, 33(2), 131–136. https://doi.org/10.1111/j.0730-7659.2006.00090.x
Brazelton, T. (1973). Neonatal Behavioral Assessment Scale: Clinics in Developmental Medicine, No. 50. Spastics International Medical Publications; JB Lippincott Co., Philadelphia.
Butt, M. L., & Kisilevsky, B. S. (2000). Music modulates behaviour of premature infants following heel lance. Canadian Journal of Nursing Research Archive.
Caine, J. (1991). The effects of music on the selected stress behaviors, weight, caloric and formula intake, and length of hospital stay of premature and low birth weight neonates in a newborn intensive care unit. Journal of Music Therapy, 28(4), 180–192.
Dalla Bella, S., Peretz, I., Rousseau, L., & Gosselin, N. (2001). A developmental study of the affective value of tempo and mode in music. Cognition, 80(3), B1–B10.
Ekman, P., & Friesen, W. V. (1975). Pictures of facial affect. Consulting Psychologists Press.
Gebuza, G., Zaleska, M., Kaźmierczak, M., Mieczkowska, E., & Gierszewska, M. (2018a). The effect of music on the cardiac activity of a fetus in a cardiotocographic examination. Advances in Clinical and Experimental Medicine, 27(5), 615–621. https://doi.org/10.17219/acem/68693
Gebuza, G., Zaleska, M., Kaźmierczak, M., Mieczkowska, E., & Gierszewska, M. (2018b). The effect of music on the cardiac activity of a fetus in a cardiotocographic examination. Advances in Clinical and Experimental Medicine, 27(5), 615–621. https://doi.org/10.17219/acem/68693
Gordon, E. (2013). Music learning theory for newborn and young children.
Gregory, A. H., Worrall, L., & Sarge, A. (1996). The development of emotional responses to music in young children. Motivation and Emotion, 20(4), 341–348.
Hannon, E. E., & Trainor, L. J. (2007). Music acquisition: Effects of enculturation and formal training on development. Trends in Cognitive Sciences, 11(11), 466–472.
Hernandez-Reif, M., Diego, M., & Field, T. (2006). Instrumental and vocal music effects on EEG and EKG in neonates of depressed and non-depressed mothers. Infant Behavior and Development, 29(4), 518–525.
Ilari, B. (2015). Rhythmic engagement with music in early childhood: A replication and extension. Journal of Research in Music Education, 62(4), 332–343.
Ilari, B., & Sundara, M. (2009). Music listening preferences in early life: Infants’ responses to accompanied versus unaccompanied singing. Journal of Research in Music Education, 56(4), 357–369. https://doi.org/10.1177/0022429408329107
Juslin, P. N., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research, 33(3), 217–238.
Keith, D. R., Russell, K., & Weaver, B. S. (2009). The effects of music listening on inconsolable crying in premature infants. Journal of Music Therapy, 46(3), 191–203.
Khoshkholgh, R., Keshavarz, T., Moshfeghy, Z., Akbarzadeh, M., Asadi, N., & Zare, N. (2016). Comparison of the effects of two auditory methods by mother and fetus on the results of non-stress test (baseline fetal heart rate and number of accelerations) in pregnant women: A randomized controlled trial. Journal of Family & Reproductive Health, 10(1), 27–34.
Koelsch, S., Bashevkin, T., Kristensen, J., Tvedt, J., & Jentschke, S. (2019). Heroic music stimulates empowering thoughts during mind-wandering. Scientific Reports, 9(1), 10317. https://doi.org/10.1038/s41598-019-46266-w
Kotilahti, K., Nissilä, I., Näsi, T., Lipiäinen, L., Noponen, T., Meriläinen, P., Huotilainen, M., & Fellman, V. (2010). Hemodynamic responses to speech and music in newborn infants. Human Brain Mapping, 31(4), 595–603.
Loewy, J., Stewart, K., Dassler, A.-M., Telsey, A., & Homel, P. (2013). The effects of music therapy on vital signs, feeding, and sleep in premature infants. Pediatrics, 131(5), 902–918.
Lorch, C. A., Lorch, V., Diefendorf, A. O., & Earl, P. W. (1994). Effect of stimulative and sedative music on systolic blood pressure, heart rate, and respiratory rate in premature infants. Journal of Music Therapy, 31(2), 105–118.
Malloch, S., & Trevarthen, C. (Eds.). (2009). Communicative musicality: Exploring the basis of human companionship. Oxford University Press.
Marx, V., & Nagy, E. (2015). Fetal behavioural responses to maternal voice and touch. PLoS ONE, 10(6), e0129118.
Masataka, N. (1999). Preference for infant-directed singing in 2-day-old hearing infants of deaf parents. Developmental Psychology, 35(4), 1001.
Masataka, N. (2006). Preference for consonance over dissonance by hearing newborns of deaf parents and of hearing parents. Developmental Science, 9(1), 46–50.
Metz, E. (1989). Movement as a musical response among preschool children. Journal of Research in Music Education, 37(1), 48–60.
Miller, M. Q., & Quinn-Hurst, M. (1994). Neurobehavioral assessment of high-risk infants in the neonatal intensive care unit. American Journal of Occupational Therapy, 48(6), 506–513.
Nagy, E., Thompson, P., Mayor, L., & Doughty, H. (2021). Do foetuses communicate? Foetal responses to interactive versus non-interactive maternal voice and touch: An exploratory analysis. Infant Behavior and Development, 63, 101562. https://doi.org/10.1016/j.infbeh.2021.101562
Nawrot, E. S. (2003). The perception of emotional expression in music: Evidence from infants, children and adults. Psychology of Music, 31(1), 75–92.
Noldus Information Technology. (2003). The observer reference manual, version 5.0. Wageningen, The Netherlands.
Noldus Information Technology. (2009). The observer XT reference manual, version 9.0. Wageningen, The Netherlands.
Panksepp, J., & Bernatzky, G. (2002). Emotional sounds and the brain: The neuro-affective foundations of musical appreciation. Behavioural Processes, 60(2), 133–155.
Perani, D., Saccuman, M. C., Scifo, P., Spada, D., Andreolli, G., Rovelli, R., Baldoli, C., & Koelsch, S. (2010). Functional specializations for music processing in the human newborn brain. Proceedings of the National Academy of Sciences, 107(10), 4758–4763.
Prechtl, H. F. (1974). The behavioural states of the newborn infant (a review). Brain Research, 76(2), 185–212.
Querleu, D., Renard, X., Versyp, F., Paris-Delrue, L., & Crèpin, G. (1988). Fetal hearing. European Journal of Obstetrics & Gynecology and Reproductive Biology, 28(3), 191–212.
Reybrouck, M., & Eerola, T. (2017). Music and its inductive power: A psychobiological and evolutionary approach to musical emotions. Frontiers in Psychology, 8, 494.
Rock, A. M., Trainor, L. J., & Addison, T. L. (1999). Distinctive messages in infant-directed lullabies and play songs. Developmental Psychology, 35(2), 527.
Savage, P. E., Loui, P., Tarr, B., Schachner, A., Glowacki, L., Mithen, S., & Fitch, W. T. (2020). Music as a coevolved system for social bonding. Behavioral and Brain Sciences, 44, 1–36.
Schellenberg, E. G., Krysciak, A. M., & Campbell, R. J. (2000). Perceiving emotion in melody: Interactive effects of pitch and rhythm. Music Perception, 18(2), 155–171. https://doi.org/10.2307/40285907
Schubert, E., & McPherson, G. E. (2015). Underlying mechanisms and processes in the development of emotion perception in music. The child as musician: A handbook of musical development.
Shahidullah, S., & Hepper, P. G. (1994). Frequency discrimination by the fetus. Early Human Development, 36(1), 13–26.
Sloboda, J. A., & Juslin, P. N. (2001). Music and emotion: Theory and research. Oxford University Press.
Swaminathan, S., & Schellenberg, E. G. (2015). Current emotion research in music psychology. Emotion Review, 7(2), 189–197.
Trainor, L. J., Clark, E. D., Huntley, A., & Adams, B. A. (1997). The acoustic basis of preferences for infant-directed singing. Infant Behavior and Development, 20(3), 383–396.
Trainor, L. J., & Heinmiller, B. M. (1998). The development of evaluative responses to music: Infants prefer to listen to consonance over dissonance. Infant Behavior and Development, 21(1), 77–88.
Trehub, S. E., Trainor, L. J., & Unyk, A. M. (1993). Music and speech processing in the first year of life. In Advances in child development and behavior (Vol. 24, pp. 1–35). Elsevier.
Trehub, S. E. (2003). The developmental origins of musicality. Nature Neuroscience, 6(7), 669–673. https://doi.org/10.1038/nn1084
Trehub, S. E., & Nakata, T. (2001). Emotion and music in infancy. Musicae Scientiae, 5, 37–61.
Trehub, S. E., Unyk, A. M., Kamenetsky, S. B., Hill, D. S., Trainor, L. J., Henderson, J. L., & Saraza, M. (1997). Mothers’ and fathers’ singing to infants. Developmental Psychology, 33(3), 500.
Tristao, R. M., de Jesus, J. A. L., de Lima Lemos, M., & Freire, R. D. (2020). Foetal music perception: A comparison study between heart rate and motor responses assessed by APIB Scale in Ultrasound Exam.
Vlismas, W., Malloch, S., & Burnham, D. (2013). The effects of music and movement on mother–infant interactions. Early Child Development and Care, 183(11), 1669–1688.
Walker-Andrews, A. S. (1997). Infants’ perception of expressive behaviors: Differentiation of multimodal information. Psychological Bulletin, 121(3), 437.
Wilkin, P. E. (1995). A comparison of fetal and newborn responses to music and sound stimuli with and without daily exposure to a specific piece of music. Bulletin of the Council for Research in Music Education, 127, 163–169.
Winkler, I., Háden, G. P., Ladinig, O., Sziller, I., & Honing, H. (2009). Newborn infants detect the beat in music. Proceedings of the National Academy of Sciences, 106(7), 2468–2471.
Zentner, M., & Eerola, T. (2010). Rhythmic engagement with music in infancy. Proceedings of the National Academy of Sciences, 107(13), 5768–5773.
Acknowledgements
The authors thank Dezsone Nagy for her help in managing the database and the nurses of the clinic for their assistance. EN was supported by an RF-2020-552 Research Fellowship from the Leverhulme Trust while writing the manuscript.
Funding
The research was not funded. EN’s time was supported by an RF-2020-552 Research Fellowship from the Leverhulme Trust while writing the manuscript.
Author information
Contributions
EN designed the study, tested the babies, coded the videos, analyzed the data, and wrote up the study. EN, RC, and NR designed the coding system, coded the data, and contributed to the data analysis. TE and EN carried out the validation study for the materials. OH recruited the sample, contributed to the design, and the testing. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Consent to Participate
Mothers signed an informed consent before the experiment began.
Consent for Publication
The mothers of the babies gave their written consent for the pictures to be used for publication purposes.
Ethics Approval
The Department of Obstetrics and Gynaecology of the Albert Szent-Gyorgyi Medical University and ethics board at the University of Dundee approved this research.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.