Measuring the Happiness of Large-Scale Written Expression: Songs, Blogs, and Presidents
Dodds, P.S. & Danforth, C.M. J Happiness Stud (2010) 11: 441. doi:10.1007/s10902-009-9150-9
The importance of quantifying the nature and intensity of emotional states at the level of populations is evident: we would like to know how, when, and why individuals feel as they do if we wish, for example, to better construct public policy, build more successful organizations, and, from a scientific perspective, more fully understand economic and social phenomena. Here, by incorporating direct human assessment of words, we quantify happiness levels on a continuous scale for a diverse set of large-scale texts: song titles and lyrics, weblogs, and State of the Union addresses. Our method is transparent, improvable, capable of rapidly processing Web-scale texts, and moves beyond approaches based on coarse categorization. Among a number of observations, we find that the happiness of song lyrics trends downward from the 1960s to the mid 1990s while remaining stable within genres, and that the happiness of blogs has steadily increased from 2005 to 2009, exhibiting a striking rise and fall with blogger age and distance from the Earth’s equator.
Keywords: Happiness, Hedonometer, Measurement, Emotion, Written expression, Remote sensing, Blogs, Song lyrics, State of the Union addresses
The desire for well-being and the avoidance of suffering arguably underlies all behavior (Argyle 2001; Layard 2005; Gilbert 2006; Snyder and Lopez 2009). Indeed, across a wide range of cultures, people regularly rank happiness as what they want most in life (Argyle 2001; Layard 2005; Lyubomirsky 2007) and numerous countries have attempted to introduce indices of well-being, such as Bhutan’s National Happiness Index. Such a focus is not new: Plato held that achieving eudaimonia (flourishing) was an individual’s true goal (Jones 1970), Bentham’s hedonistic calculus and John Stuart Mill’s refinements (Russell 1961) sought to codify collective happiness maximization as the determinant of all moral action, and in the United States Declaration of Independence, Jefferson famously asserted the three unalienable rights of ‘life, liberty, and the pursuit of happiness.’
In recognizing the importance of quantifying well-being, we have seen substantial interest and progress in measuring how individuals feel in a wide range of contexts, particularly in the fields of psychology (Osgood et al. 1957; Csikszentmihalyi et al. 1977; Csikszentmihalyi 1990; Gilbert 2006) and behavioral economics (Kahneman et al. 2004; Layard 2005). Most methods, such as experience sampling (Conner Christensen et al. 2003) and day reconstruction (Kahneman et al. 2004), are based on self-reported assessments of happiness levels and are consequently invasive to some degree; dependent on memory and self-perception, which degrades reliability (Killworth and Bernard 1976); likely to induce misreporting (Martinelli and Parker 2009); and limited to small sample sizes due to costs.
Complementing these techniques, we would ideally also have some form of transparent, non-reactive, population-level hedonometer (Edgeworth 1881) which would remotely sense and quantify emotional levels, either post hoc or in real time (Mishne and de Rijke 2005). Our method for achieving this goal based on large-scale texts is to use human evaluations of the emotional content of individual words within a given text to generate an overall score for that text. Our method could be seen as a form of data mining (Witten and Frank 2005; Tan et al. 2005), but since it involves human assessment and not just statistical or machine learning techniques, could be more appropriately classed as ‘sociotechnical data mining.’ In what follows, we explain the evaluations we use, how we combine these evaluations in analysing written expression, and address various issues concerning our measure.
For human evaluations of the ‘happiness’ level of individual words, we draw directly on the Affective Norms for English Words (ANEW) study (Bradley and Lang 1999). For this study, participants graded their reactions to a set of 1034 words with respect to three standard semantic differentials (Osgood et al. 1957) of good-bad (psychological valence), active-passive (arousal), and strong-weak (dominance) on a 1–9 point scale with half integer increments. The specific words tested had been previously identified as bearing meaningful emotional content (Mehrabian and Russell 1974; Bellezza et al. 1986). Here, we focus specifically on ratings of psychological valence. [We note that other scales are possible, for example ones that do not presume a single dimension of good-bad, but rather independent scales for good and bad (Diener and Emmons 1984).]
Of great utility to our present work was the study’s explanation of the psychological valence scale to participants as a ‘happy-unhappy scale.’ Participants were further told that “At one extreme of this scale, you are happy, pleased, satisfied, contented, hopeful. …The other end of the scale is when you feel completely unhappy, annoyed, unsatisfied, melancholic, despaired, or bored” (Bradley and Lang 1999). We can thus reasonably take the average psychological valence scores for the ANEW study words as measures of average happiness experienced by a reader. For consistency with the literature, we will use the term valence for the remainder of the paper.
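Concretely, the measure described above amounts to a frequency-weighted average of per-word valence scores over the words of a text that appear in the rated word list. The following sketch illustrates the computation; the valence values below are invented stand-ins for illustration only, not the published ANEW norms:

```python
# Sketch of a frequency-weighted valence average over rated words.
# The scores below are illustrative stand-ins, not the ANEW norms.
from collections import Counter

anew_valence = {
    "love": 8.7, "happy": 8.2, "home": 7.9,
    "lonely": 2.2, "hate": 2.1, "pain": 2.1,
}

def text_valence(text, lexicon):
    """Average valence of a text, weighting each rated word by its frequency.

    Words not present in the lexicon are ignored; returns None if the text
    contains no rated words at all.
    """
    counts = Counter(text.lower().split())
    total = sum(counts[w] for w in lexicon if w in counts)
    if total == 0:
        return None
    return sum(lexicon[w] * counts[w] for w in lexicon if w in counts) / total
```

For instance, under these stand-in values a text containing two instances of ‘love’ and one of ‘pain’ scores (2 × 8.7 + 2.1)/3 = 6.5; unrated words contribute nothing.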
Our general focus is thus on quantifying how writings are received rather than on what an author may have intended to convey emotionally. Nevertheless, as we discuss below, we attempt to understand the latter with our investigations of blogs.
In using the ANEW data set, we also take the viewpoint that direct human assessment remains, in many complex contexts, superior to artificial intelligence methods. Describing the content of an image, for example, remains an extremely difficult computational problem, yet is trivial for people (von Ahn 2006).
Since our method does not account for the meaning of words in combination, it is suitable only for large-scale texts. We argue that the results from even sophisticated natural language parsing algorithms (Riloff and Wiebe 2003) cannot be entirely trusted for small-scale texts, as individual expression is simply too variable (Lee 2004) and must therefore be viewed over long time scales (or equivalently via large-scale texts). Problematically, the desired scalability is a barrier for such parsing algorithms which run slowly and still suffer from considerable inaccuracy. With our method based on the ANEW data set, we are able to collect and rapidly analyze very large corpuses, giving strength to any statistical assessment. Indeed, with advances in cloud computing, we see no practical limit to the size of meaningful corpuses we can analyse.
A key aspect of our method is that it allows us to quantify happiness on a continuum. By comparison, previous analyses have focused on differences in frequency of words belonging to coarse, broad categories (Cohn et al. 2004), such as ‘negative emotion’, ‘no emotion’, and ‘positive emotion’. For example, using a category-based approach and covering a smaller scale in time and population size than we do here, studies of blogs over a single day have found that content and style vary with age and gender, suggesting automated identification of author demographics is feasible (Schler et al. 2006). However, comparisons between data sets using broad categorical variables are not robust, even if the categories can be ordered. Consider two texts that have the same balance of positive and negative emotion words. Without a value of valence for individual words, we are unable to distinguish further between these texts, which may easily be distinct in emotional content. By using the ANEW data set, we are able to numerically quantify emotional content in a principled way that can be refined with future studies of human responses to words.
2 Description of Large-Scale Texts Studied
We use our method to study four main corpuses: song lyrics, song titles, blog sentences written in the first person and containing the word “feel”, and State of the Union addresses. Before exploring valence patterns in depth for these data sets, we first provide some summary statistics relevant to our particular interests, and we also detail our sources.
Total number of words in each corpus along with the number and percentage of words found in the ANEW database
Top five most frequently occurring ANEW words in each corpus with frequency expressed as a percentage of all ANEW words
We obtained our four data sets as follows. We downloaded lyrics to 232,574 songs composed by 20,025 artists between 1960 and 2007 from the website http://www.hotlyrics.net and tagged them with their release year and genre using the Compact Disc Data Base available online at http://www.freedb.org. We separately obtained from http://www.freedb.org a larger database of song titles and genre classifications. Starting August, 2005, first person sentences using the word feel (or a conjugated form) were extracted from blogs and made available through the website http://www.wefeelfine.org, via a public API (Harris and Kamvar 2009). Demographic data was furnished by the site when available. These sentences appeared in over 2.3 million unique blogs during a 47 month span starting in August 2005. In total, we retrieved 9,113,772 sentences which appeared during the period August 26, 2005 to June 30, 2009, inclusive. For each day, we removed repeat sentences of six words or more to eliminate substantive copied material. We obtained State of the Union messages from the American Presidency Project at http://www.presidency.ucsb.edu. Finally, we accessed the British National Corpus at http://www.natcorp.ox.ac.uk.
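The per-day removal of repeated sentences can be sketched as follows. This is a hypothetical simplification for illustration: the actual filtering pipeline is not specified here beyond the six-word threshold.

```python
def deduplicate(sentences):
    """Keep the first occurrence of any sentence of six or more words and
    drop later repeats; shorter sentences are always kept, since brief
    phrases recur naturally rather than through copying."""
    seen = set()
    kept = []
    for s in sentences:
        key = s.strip().lower()
        if len(key.split()) >= 6:
            if key in seen:
                continue  # repeat of substantive copied material
            seen.add(key)
        kept.append(s)
    return kept
```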
Four basic possibilities arise for each word’s contribution, as indicated by the key in Fig. 5. A word may have higher or lower valence than the average of text a, and it may also increase or decrease in relative abundance. Further, the contribution Δi(b,a) of word i will be 0 if either the relative prevalences are the same or the valence of word i matches the average valence of text a. Note that Δi(b,a) is not symmetric in b and a and is meant only to describe one text (b) with respect to another (a).
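Taking Δi(b,a) as the change in word i’s relative frequency multiplied by its valence offset from the average of text a, the per-word contributions can be computed as in the following minimal sketch (published word-shift displays may additionally normalize these contributions):

```python
def word_shift(freq_a, freq_b, valence):
    """Per-word contributions to the valence difference of text b relative
    to text a: (change in relative frequency) x (valence offset from the
    average of text a).  A contribution is zero when either factor vanishes,
    matching the four cases described in the text.  Both texts are assumed
    to contain at least one rated word."""
    def rel(freq):
        total = sum(freq.get(w, 0) for w in valence)
        return {w: freq.get(w, 0) / total for w in valence}
    pa, pb = rel(freq_a), rel(freq_b)
    v_bar_a = sum(valence[w] * pa[w] for w in valence)  # average valence of text a
    return {w: (pb[w] - pa[w]) * (valence[w] - v_bar_a) for w in valence}
```

Under this definition the contributions sum exactly to the overall valence difference between texts b and a, so ranking words by the magnitude of Δi(b,a) identifies the dominant drivers of a change.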
Ranking words according to the above definition of Δi gives us Fig. 5. We see that the decrease in average valence for lyrics after 1980 is due to a loss of positive words such as ‘love’, ‘baby’, and ‘home’ (italicized and in red) and a gain in negative words such as ‘hate’, ‘pain’, and ‘death’ (normal font and in blue). These shifts are partially countered by decreases in ‘lonely’ and ‘sad’ and increases in ‘life’ and ‘god’, but the former dominate the latter, and the average valence decreases from approximately 6.4 to 6.1. Even though the contribution of ‘love’ is clearly the largest, the overall drop is due to changes in many word frequencies. And while we are unable to assess words for which we do not have valence scores, we can make qualitative observations. For example, the word ‘not’, a generally negative word, accounts for 0.22% of all words prior to 1980 and 0.28% of all words after 1980, in keeping with the overall drop in valence.
Average valence scores for the top and bottom 10 artists for which we have the lyrics to at least 50 songs and at least 1000 samples of (nonunique) words from the ANEW study word list; listed artists include All 4 One, S Club 7, K Ci & JoJo, Diana Ross & the Supremes, Black Label Society, and The Beach Boys
While of considerable intrinsic interest, song lyrics of popular music provide us with a limited reflection of society’s emotional state, and we move now to exploring more directly the valence of human expression. The proliferation of personal online writing such as blogs gives us the opportunity to measure emotional levels in real time. At the end of 2008, the blog tracking website http://www.technorati.com reported it had indexed 133 million blog records. Blogger demographics are broad with an even split between genders and high racial diversity with some skew towards the young and educated (Lenhart and Fox 2006).
We have examined nearly 10 million blog sentences retrieved via the website http://www.wefeelfine.org, as we have described in detail above. In focusing on this subset of sentences, we are attempting to use our valence measures not only to estimate perceived valence but also the revealed emotional states of blog authors. We are thus able to present results from what might be considered a very basic remote-sensing hedonometer.
We highlight a number of specific dates which most sharply depart from their month’s average: Christmas Day; Valentine’s Day; September 11, 2006, the fifth anniversary of the World Trade Center and Pentagon attacks in the United States; September 10 in other years; the US Presidential Election, November 4, 2008; the US Presidential Inauguration, January 20, 2009; and the day of Michael Jackson’s death, June 25, 2009 (June 26 and 27 were also equally low).
Christmas Day and Valentine’s Day are largely explained by the increase in frequency of the words Christmas and Valentine, both part of the ANEW word list. But other words contribute strongly. For Christmas Day, there is more ‘family’ and less ‘pain’, with an increase in ‘guilty’ going against the trend. As shown for Valentine’s day in 2008 in the second panel of Fig. 8, ‘love’ and ‘people’ are more prevalent, ‘hate’ and ‘pain’ less so, countervailed by more ‘sad,’ ‘lonely,’ and ‘bored.’
The strongest word driving the spike in valence for the 2008 US Election, the happiest individual day in the entire dataset, is ‘proud’ (third panel of Fig. 8). Valence increases also due to a mixture of more positive words such as ‘hope’ and ‘win’ as well as a decrease in the appearances of ‘pain,’ ‘sad,’ and ‘guilty.’
Our age dependent estimates of valence comport with and extend previous observations of blogs that suggested an increase in valence over the age range 10–30 (Schler et al. 2006). Our results are however at odds with those of studies based on self-reports, which largely find little or no change in valence over lifetimes (Easterlin 2001, 2003). These latter results have been considered surprising, as a rise and fall in valence—precisely what we find here—would be expected due to changes in income (rising) and health (eventually declining) (Easterlin 2001). Our results do not preclude that self-perception of happiness may indeed be stable, but since our results are based on measured behavior, they strongly suggest individuals do present differently throughout their lifespan. And while we have no data regarding income, because income typically rises with age, our results are sympathetic to recent work that finds happiness increases with income (Stevenson and Wolfers 2008), going against the well known Easterlin Paradox, popularized as the notion that ‘money does not buy happiness’ (Easterlin 1974).
Figure 9b shows that the average valence of blog sentences gently rolls over as a function of absolute latitude (i.e., combining both the Northern and Southern Hemispheres). Average valence ranges from 5.71 (for 0–11.5°) up to 5.83 (for 29.5–44.5°) and then back to 5.78 (for 52.5–69.5°). Seasonal Affective Disorder (Rosenthal et al. 1984) may be the factor behind the small drop for higher latitudes, though a different mechanism would need to be invoked to account for lower valence near the equator. One possible explanation could be that the relatively higher population of the mid-latitudes leads to stronger social structures (Layard 2005). We find some support for the social argument for individuals near the equator (absolute latitude ≤11.5), who we observe more frequently use the words ‘sad’, ‘bored’, ‘lonely’, ‘stupid’ and ‘guilty’ and avoid using ‘good’ and ‘people.’ On the other hand, the valence drop at higher latitudes (between 52.5 and 69.5° absolute latitude) is reflected in the frequency changes of a mixture of social, psychological, and some conditions-related words: ‘sick’, ‘guilty’, ‘cold’, ‘depressed’, and ‘headache’ all increase, ‘love’ and ‘life’ decrease, offset by less ‘hurt’ and ‘pain’ and more ‘bed’ and ‘sleep.’
At a much more subtle level, a weekly cycle in valence is visible in blog sentences (Fig. 9c). A relatively sharp peak in valence occurs on Sunday, after which valence steadily drops daily to its lowest point on Wednesday before climbing back up. Monday, contrary to commonly held perceptions but consistent with previous studies (Stone et al. 1984), exhibits the highest average valence after Sunday, perhaps indicating a lag effect.
We also observe some variation among countries. Of the four countries with at least 1% representation, the United States has the highest average valence (5.83) followed by Canada (5.78), the United Kingdom (5.77), and Australia (5.74).
In terms of gender, males exhibit essentially the same average valence as females (5.89 vs. 5.91). Females however show a larger variance than males (4.75 vs. 4.44) in agreement with past research (Snyder and Lopez 2009). We further find females disproportionately use the most impactful high and low valence words separating the two genders: ‘love’, ‘baby’, ‘loved’ and ‘happy’ on the positive end, and ‘hurt’, ‘hate’, ‘sad’, and ‘alone’ on the negative end. In fact, of the top 15 words contributing to δ(female, male), the only one used more frequently by males is the rather perfunctory word ‘good.’
The presidents with the highest average valence scores are Kennedy (6.41), Eisenhower (6.38), and Reagan (6.38), all of whose speeches are tightly clustered around their means. Eisenhower and Kennedy reached this high point after a period of relatively low valence that began with the First World War, during which Wilson’s speeches steeply dropped from an initial 6.58 in 1913 to 5.88 in 1920. The mean valence of Coolidge’s addresses provides the single exception during this time. The low average of Coolidge’s successor Hoover is largely due to his speech in 1930, the first one given after the stock market crash of October 29, 1929 (Black Tuesday), which marked the beginning of the Great Depression; his speeches are burdened with ‘depression’, ‘debt’, ‘crisis’, and ‘failure.’ While Franklin Roosevelt’s overall average valence is low, the first eight speeches of his four-term stay in office range from 6.06 to 6.34. His last four speeches, coming during the Second World War (1942–1945), are sharply lower in valence, ranging from 5.48 to 5.60; ‘war’ naturally dominates these later speeches and, along with ‘fight’ and ‘destroy’, overwhelms the positives of ‘peace’ and ‘victory.’
The large-scale pattern of the 19th Century shows two periods of relatively high valence, 1820–1840 and 1880–1890. The years before and during the American Civil War form a local minimum in valence corresponding to Buchanan and Lincoln.
The recent era shows a drop from the level of Eisenhower and Kennedy to that of Johnson (6.08), whose first State of the Union address came just seven weeks after the assassination of Kennedy and whose remaining addresses were delivered during the escalating Vietnam War. Valence rises through the 1970s to reach Reagan’s high in the 1980s, from which it has since declined.
4 Concluding Remarks
Undoubtedly, the online recording of social interactions and personal experiences will continue to grow, providing ever richer data sets and the consequent opportunity and need for a wide range of scientific investigations. A natural extension of our work here would be to examine the dynamics of emotions in online interactive contexts, particularly in the realm of contagion (Hatfield et al. 1993; Fowler and Christakis 2008). If emotional contagion is observable, we would then be in a position to characterize its nature on the spectrum from analogous to an infectious disease (Murray 2002) to the more complex threshold-based contagion (Granovetter 1978; Dodds and Watts 2004). Our technique could also be useful in testing predictive theories of social interactions such as Heise’s affect control theory (Heise 1979) and Burke’s identity control theory (Stets and Tsushima 2001).
While we have been able to make and support a range of observations with our method for measuring the emotional content of large-scale texts, our approach can be improved in a number of ways. A first step would be to perform experiments and surveys to gather emotional content estimates for a more extensive set of individual words. The instrumental lens can also be made more sophisticated by coupling word assessments with detailed demographics of participants. Other approaches not necessarily based on semantic differentials in the manner of the ANEW study could also be naturally explored. Game-based experiments could also be used to assess the emotional content of common word groups and phrases (von Ahn 2006), allowing us to better characterize the micro–macro connection between the atoms of words and sentences, and differences in interpretations among various age groups and cultures.
The authors are grateful to Jonathan Harris and Sep Kamvar, the creators of http://www.wefeelfine.org; to John Tucker, Lilian Lee, Andrew G. Reece, Josh Bongard, Mary Lou Zeeman, and Elizabeth Pinel for helpful discussions; and to three anonymous reviewers for their suggestions.
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.