# The emotional arcs of stories are dominated by six basic shapes


## Abstract

Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a ‘big data’ lens. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories and forming patterns that are meaningful to us. Here, by classifying the emotional arcs for a filtered subset of 1,327 stories from Project Gutenberg’s fiction collection, we find a set of six core emotional arcs which form the essential building blocks of complex emotional trajectories. We strengthen our findings by separately applying matrix decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads.

### Keywords

stories sentiment mining narratology language society

## 1 Introduction

The power of stories to transfer information and define our own existence has been shown time and again [1, 2, 3, 4, 5]. We are fundamentally driven to find and tell stories; as a species, we have been likened to *Pan Narrans* or *Homo Narrativus*. Stories are encoded in art, language, and even in the mathematics of physics: we use equations to represent both simple and complicated functions that describe our observations of the real world. In science, we formalize the ideas that best fit our experience with principles such as Occam’s Razor: the simplest story is the one we should trust. We tend to prefer stories that fit into familiar molds, and reject narratives that do not align with our experience [6].

We seek to better understand stories that are captured and shared in written form, a medium that since inception has radically changed how information flows [7]. Without evolved cues from tone, facial expression, or body language, written stories are forced to capture the entire transfer of experience on a page. An often integral part of a written story is the emotional experience that is evoked in the reader. Here, we use a simple, robust sentiment analysis tool to extract the reader-perceived emotional content of written stories as they unfold on the page.

We objectively test aspects of the theories of folkloristics [8, 9], specifically the commonality of core stories within societal boundaries [4, 10]. A major component of folkloristics is the study of society and culture through literary analysis, sometimes referred to as *narratology*, which at its core concerns ‘a series of events, real or fictional, presented to the reader or the listener’ [11]. In our present treatment, we consider the plot as the ‘backbone’ of events that occur in a chronological sequence (more detail on previous theories of plot is given in Appendix A in Additional file 1). While the plot captures the mechanics of a narrative and the structure encodes their delivery, in the present work we examine the emotional arc that is invoked through the words used. The emotional arc of a story does not give us direct information about the plot or the intended meaning of the story, but rather exists as part of the whole narrative (*e.g.*, an emotional arc showing a fall in sentiment throughout a story may arise from very different plot and structure combinations). This distinction between the emotional arc and the plot of a story is one point of misunderstanding in other work that has drawn criticism from the digital humanities community [12]. Through the identification of motifs [13], narrative theories [14] allow us to analyze, interpret, describe, and compare stories across cultures and regions of the world [15]. We show that automated extraction of emotional arcs is not only possible, but can test previous theories and provide new insights, with the potential to quantify unobserved trends as the field transitions from data-scarce to data-rich [16, 17].

Kurt Vonnegut proposed tracking the *emotional arc* of a story on the ‘Beginning-End’ and ‘Ill Fortune-Great Fortune’ axes [18]. Vonnegut finds a remarkable similarity between Cinderella and the origin story of Christianity in the Old Testament (see Figure S1 in Appendix B in Additional file 1), leading us to search for all such groupings. In a recorded lecture available on YouTube [19], Vonnegut asserted:

‘There is no reason why the simple shapes of stories can’t be fed into computers, they are beautiful shapes.’

For our analysis, we apply three independent tools: matrix decomposition by singular value decomposition (SVD), supervised learning by agglomerative (hierarchical) clustering with Ward’s method, and unsupervised learning by a self-organizing map (SOM, a type of neural network). Each tool encompasses different strengths: the SVD finds the underlying basis of all of the emotional arcs, the clustering classifies the emotional arcs into distinct groups, and the SOM generates arcs from noise which are similar to those in our corpus using a stochastic process. It is only by considering the results of each tool in support of each other that we are able to confirm our findings.

We proceed as follows. We first introduce our methods in Section 2, we then discuss the combined results of each method in Section 3, and we present our conclusions in Section 4. A graphical outline of the methodology and results can be found as Figure S2 in Appendix B in Additional file 1.

## 2 Methods

### 2.1 Emotional arc construction

As an illustrative example, consider the emotional arc of *Harry Potter and the Deathly Hallows*, the final book in the popular Harry Potter series by JK Rowling. While the plot of the book is nested and complicated, the emotional arc associated with each sub-narrative is clearly visible. We analyze the emotional arcs corresponding to complete books, and to limit the conflation of multiple core emotional arcs, we restrict our analysis to shorter books by selecting a maximum number of words when building our filter. Further details of the emotional arc construction can be found in Appendix C in Additional file 1.
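The construction described above can be sketched as a sliding-window average of per-word sentiment scores. The tiny lexicon, window size, and step below are illustrative stand-ins, not the actual sentiment dictionary or window parameters used in this work:

```python
# Sketch of emotional arc construction: slide a fixed-size word window
# through the text and score each window by the mean sentiment of its
# words under a word-happiness lexicon (hypothetical values here).

def emotional_arc(tokens, lexicon, window=5, step=2):
    """Return mean-sentiment scores for overlapping windows of `tokens`."""
    arc = []
    for start in range(0, len(tokens) - window + 1, step):
        scores = [lexicon[w] for w in tokens[start:start + window] if w in lexicon]
        arc.append(sum(scores) / len(scores) if scores else 0.0)
    return arc

# Toy example: a short "fall then rise" passage.
lexicon = {"joy": 8.0, "love": 8.4, "grief": 2.0, "death": 1.5, "day": 6.0}
tokens = "joy love day grief death grief day love joy love".split()
arc = emotional_arc(tokens, lexicon, window=4, step=2)
print(arc)  # sentiment dips in the middle windows, then recovers
```

In the real pipeline the windows span thousands of words, so the arc is a smooth time series suitable for the matrix methods that follow.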

### 2.2 Project Gutenberg corpus

For a suitable corpus we draw on the open access Project Gutenberg data set [25]. We apply rough filters to the collection (roughly 50,000 books) in an attempt to obtain a set of books that represent English works of fiction. We start by selecting only English books, with total words between 20,000 and 100,000, with more than 40 downloads from the Project Gutenberg website, and with a Library of Congress Class corresponding to English fiction.^{1} To ensure that the 40-download limit is not influencing the results here, we further test each method at 10, 20, 40, and 80 download thresholds, in each case confirming the 40-download findings to be qualitatively unchanged. Next, we remove books with any word in the title from a list of keywords (*e.g.*, ‘poems’ and ‘collection’; full list in Appendix C in Additional file 1). From within this set of books, we remove the front and back matter of each book using regular expression patterns that match on 98.9% of the books included. Two slices of the data for download count and total word count are shown in Appendix C, Figure S4 in Additional file 1. We provide a list of the book IDs included for download in the Online Appendices at http://compstorylab.org/share/papers/reagan2016b/; the books are listed in Table S1 in Appendix D in Additional file 1, and we attempt to provide the Project Gutenberg ID when we mention a book by title herein. Given the Project Gutenberg ID *n*, the raw ebook is available online from Project Gutenberg at http://www.gutenberg.org/ebooks/n/; *e.g.*, *Alice’s Adventures in Wonderland* by Lewis Carroll has ID 11 and is available at http://www.gutenberg.org/ebooks/11/. We also provide an online, interactive version of the emotional arc for each book indexed by the ID; *e.g.*, *Alice’s Adventures in Wonderland* is available at http://hedonometer.org/books/v3/11/.
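The filter chain above can be sketched as a simple predicate over catalog metadata. The record layout and field names below are hypothetical; in practice these values come from the Project Gutenberg catalog:

```python
# Sketch of the corpus filter: English language, 20k-100k words, more
# than 40 downloads, a fiction Library of Congress Class, and no
# excluded keyword in the title. The keyword list is abbreviated.

EXCLUDED_TITLE_WORDS = {"poems", "collection"}   # abbreviated list
FICTION_CLASSES = {"PN", "PR", "PS", "PZ"}       # see footnote 1

def keep_book(meta, min_downloads=40):
    return (
        meta["language"] == "en"
        and 20_000 <= meta["num_words"] <= 100_000
        and meta["downloads"] > min_downloads
        and bool(FICTION_CLASSES & set(meta["lcc"]))
        and not (EXCLUDED_TITLE_WORDS & set(meta["title"].lower().split()))
    )

# Hypothetical catalog records for illustration.
books = [
    {"language": "en", "num_words": 50_000, "downloads": 120,
     "lcc": ["PR"], "title": "A Tale of Two Cities"},
    {"language": "en", "num_words": 50_000, "downloads": 120,
     "lcc": ["PR"], "title": "Collected Poems"},
]
kept = [b["title"] for b in books if keep_book(b)]
print(kept)
```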

### 2.3 Principal component analysis (SVD)

Stacking the sentiment time series for each book as row *i* in the matrix *A*, we apply the SVD to find

\(A = U \Sigma V^{T} = W V^{T}.\)

The matrix *U* contains the projection of each sentiment time series onto each of the right singular vectors (rows of \(V^{T}\), eigenvectors of \(A^{T}A\)), which have singular values given along the diagonal of Σ, with \(W = U \Sigma\). Different intuitive interpretations of the matrices *U*, Σ, and \(V^{T}\) are useful in the various domains in which the SVD is applied; here, we focus on the right singular vectors as an orthonormal basis for the sentiment time series in the rows of *A*, which we refer to as the *modes*. We combine Σ and *U* into the single coefficient matrix *W* for clarity and convenience, such that *W* now represents the mode coefficients.
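The decomposition described above can be sketched with numpy: stack the arcs as rows of *A*, take the SVD, and form the mode coefficients \(W = U\Sigma\). The toy arcs below stand in for the 1,327 real sentiment time series:

```python
# Minimal SVD sketch: A = U Sigma V^T = W V^T, where rows of Vt are the
# orthonormal "modes" and W holds each story's mode coefficients.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
# Toy corpus: noisy mixtures of a linear "rise" and a "fall-rise" bend.
coeffs = rng.standard_normal((100, 2))
A = np.array([a * t + b * (t - 0.5) ** 2 + 0.01 * rng.standard_normal(50)
              for a, b in coeffs])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
W = U * s       # mode coefficients, W = U @ diag(s); one row per story
modes = Vt      # rows are the orthonormal modes

A_rebuilt = W @ modes   # equals A up to floating point error
```

Because the toy corpus mixes only two underlying shapes, the first two singular values carry essentially all of the variance; real arcs spread their energy across somewhat more modes.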

### 2.4 Hierarchical clustering

We cluster the emotional arcs by agglomerative clustering with Ward’s method, using the sum of squared differences between window sentiment values, \(D(b_{i}, b_{j}) = \sum_{t} ( b_{i}(t) - b_{j}(t) )^{2}\), with *t* indexing the window in books \(b_{i}\), \(b_{j}\), to generate the distance matrix.
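A minimal sketch of this pairwise distance follows; the squared-difference form is an assumption consistent with Ward’s method, and a library such as SciPy (`scipy.cluster.hierarchy.linkage` with `method="ward"`) would then build the dendrogram from the arcs:

```python
# Sketch of the arc-to-arc distance used to build the distance matrix
# for hierarchical clustering: D(b_i, b_j) = sum_t (b_i(t) - b_j(t))**2,
# with t indexing the sentiment window.
import numpy as np

def arc_distance(b_i, b_j):
    b_i, b_j = np.asarray(b_i), np.asarray(b_j)
    return float(np.sum((b_i - b_j) ** 2))

def distance_matrix(arcs):
    n = len(arcs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = arc_distance(arcs[i], arcs[j])
    return D

# Toy arcs: the first two are near-identical, the third is reversed.
arcs = [[0.0, 1.0, 2.0], [0.0, 1.0, 3.0], [2.0, 1.0, 0.0]]
D = distance_matrix(arcs)
print(D)
```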

### 2.5 Self-organizing map (SOM)

For each emotional arc, we find the winning node *k* in the set of nodes \(\mathcal{N}\), with distance function *D* given above and total number of nodes *N*. For results shown here we take \(\alpha = -0.15\). We implement the learning adaptation function at training iteration *i* as \(f(i) = (i+1)^{\beta}\), again with \(\beta = -0.15\), a standard value for the training hyper-parameters.
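A minimal one-dimensional SOM sketch is below. The neighborhood scheme (updating only the winning node and its immediate neighbors) is a simplification of a full SOM; only the learning adaptation \(f(i) = (i+1)^{\beta}\) with \(\beta = -0.15\) follows the text, and the toy arcs are illustrative:

```python
# Simplified self-organizing map for emotional arcs: repeatedly pick a
# training arc, find the winning node by squared-difference distance,
# and pull the winner and its immediate neighbors toward the arc with
# a decaying learning rate f(i) = (i + 1)**beta.
import numpy as np

def train_som(arcs, n_nodes=4, iterations=200, beta=-0.15, seed=0):
    rng = np.random.default_rng(seed)
    arcs = np.asarray(arcs, dtype=float)
    nodes = rng.standard_normal((n_nodes, arcs.shape[1])) * 0.1
    for i in range(iterations):
        x = arcs[rng.integers(len(arcs))]        # random training arc
        d = np.sum((nodes - x) ** 2, axis=1)     # distance D to each node
        k = int(np.argmin(d))                    # winning node
        rate = (i + 1) ** beta                   # learning adaptation f(i)
        for n in (k - 1, k, k + 1):              # winner and neighbors
            if 0 <= n < n_nodes:
                nodes[n] += rate * (x - nodes[n])
    return nodes

# Two opposite toy arc shapes; trained nodes should settle near both.
arcs = np.array([[1.0, 0.0, 1.0]] * 5 + [[-1.0, 0.0, -1.0]] * 5)
nodes = train_som(arcs, n_nodes=4)
```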

## 3 Results

We find a set of six core emotional arcs:

1. ‘Rags to riches’ (rise).
2. ‘Tragedy’, or ‘Riches to rags’ (fall).
3. ‘Man in a hole’ (fall-rise).
4. ‘Icarus’ (rise-fall).
5. ‘Cinderella’ (rise-fall-rise).
6. ‘Oedipus’ (fall-rise-fall).

### 3.1 Principal component analysis (SVD)

We emphasize that by definition of the SVD, the mode coefficients in *W* can be either positive or negative, such that the modes themselves explain variance with both their positive and negative versions. In the right panels of each mode in Figure 3 we project the 1,327 stories onto each of the first six modes and show the resulting coefficients. While none are far from 0 (as would be expected), mode 1 has a mean slightly above 0 and both modes 3 and 4 have means slightly below 0. To sort the books by their coefficient for each mode, we normalize the coefficients within each book in the rows of *W* to sum to 1, accounting for books with higher total energy, and these are the coefficients shown in the right panels of each mode in Figure 3. In Appendix E in Additional file 1, we provide supporting, intuitive details of the SVD method, as well as example emotional arc reconstructions using the modes (see Figures S5-S7 in Additional file 1). As expected, fewer than 10 modes are enough to reconstruct the emotional arc to a degree of accuracy visible to the eye.
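The observation that a handful of modes suffices can be illustrated by a truncated reconstruction. The toy arcs below are deliberately built from two underlying shapes, so two modes recover them almost exactly; real arcs need a few more:

```python
# Truncated-SVD reconstruction sketch: keep only the first k modes and
# rebuild the arc matrix, then measure the relative error.
import numpy as np

t = np.linspace(0, 1, 80)
# Each toy arc is sin(pi*t + p), a combination of exactly two shapes
# (sin and cos components), so the corpus has rank 2.
A = np.array([np.sin(np.pi * t + p) for p in np.linspace(0, 3, 60)])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # number of modes kept
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]    # rank-k reconstruction

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(rel_err)
```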

Since mode coefficients can take either sign, each mode also represents its mirrored arc, with a fall where the mode rises and *vice versa*. Mode 1, which encompasses both the ‘Rags to riches’ and ‘Tragedy’ emotional arcs, captures 30% of the variance of the entire space. We examine the closest stories to both sides of modes 1-3, and direct the reader to Figure S8 in Additional file 1 for more details on the higher order modes. The two stories that have the most support from the ‘Rags to riches’ mode are *The Winter’s Tale* (1,539) and *Oscar Wilde, Art and Morality: A Defence of ‘The Picture of Dorian Gray’* (33,689). Among the most categorical tragedies we find *Lady Susan* (946) and *Warlord of Kor* (17,958). Number 8 in the sorted list of tragedies is perhaps the most famous tragedy: *Romeo and Juliet* by William Shakespeare. Mode 2 is the ‘Man in a hole’ emotional arc, and we find the stories which most closely follow this path to be *The Magic of Oz* (419) and *Children of the Frost* (10,736). The negation of mode 2 most closely resembles the emotional arc of the ‘Icarus’ narrative. For this emotional arc, the most characteristic stories are *Shadowings* (34,215) and *Battle-Pieces and Aspects of the War* (12,384). Mode 3 is the ‘Cinderella’ emotional arc, and includes *Mystery of the Hasty Arrow* (17,763) and *Through the Magic Door* (5,317). The negation of mode 3, which we refer to as ‘Oedipus’, is found most characteristically in *This World is Taboo* (18,172), *Old Indian Days* (339), and *The Evil Guest* (10,377). We also note that the spread of the stories from their core mode increases strongly for the higher modes.

### 3.2 Hierarchical clustering

We label each cluster with its most central story (*e.g.*, considering each intra-cluster collection as a fully connected weighted network, we take the most central node), and give in parentheses the number of books in that cluster. In other words, we label each cluster by considering the network centrality of the fully connected cluster with edges weighted by the distance between stories. By cutting the dendrogram in Figure 5 at various linkage costs, we are able to extract clusters of the desired granularity. For the cuts labeled C2, C4, and C8, we show these clusters in Figures S9, S11, and S15 in Additional file 1. We find the first four of our final six arcs appearing among the eight most distinct clusters (Figure S15 in Additional file 1).

The clustering method groups stories with a ‘Man in a hole’ emotional arc, across a range of different variances, separately from the other arcs; in total, these clusters (panels A, E, and I of Figure S16 in Additional file 1) account for 30% of the Gutenberg corpus. The remainder of the stories have emotional arcs clustered among the ‘Tragedy’ arc (32%), the ‘Rags to riches’ arc (5%), and the ‘Oedipus’ arc (31%). A more detailed analysis of the results from hierarchical clustering can be found in Appendix F in Additional file 1; this result generally agrees with other attempts that use only hierarchical clustering [12].

### 3.3 Self-organizing map (SOM)

Finally, we apply Kohonen’s self-organizing map (SOM) and find core arcs from unsupervised machine learning on the emotional arcs. On the two-dimensional component plane, the prescribed network topology, we find seven spatially coherent groups exhibiting five distinct emotional arcs. These spatial groups consist of stories with core emotional arcs of differing variance.

### 3.4 Null comparison

There are many possible emotional arcs in the space that we consider. To demonstrate that these specific arcs are uniquely compelling as stories written by and for *Homo Narrativus*, we consider the true emotional arcs in relation to their most suitable comparisons: the book with randomly shuffled words (‘word salad’) and the resulting text from a 2-gram Markov model trained on the individual book itself (‘nonsense’). We compare to ‘word salad’ and ‘nonsense’ versions as they are more representative of a null model: written stories that are without coherent plot or structure to generate a coherent emotional arc, which is not true of a stochastic process (*e.g.*, a random walk model or noise). Examples of the emotional arc and null emotional arcs for a single book are shown in Figure S20 in Additional file 1, with 10 ‘word salad’ and ‘nonsense’ versions. Sampled text using each method is given in Appendix C in Additional file 1. We re-run each method on the English fiction Gutenberg corpus with the null versions of each book and verify that the emotional arcs of real stories are not simply an artifact. The singular value spectrum from the SVD is flatter, with higher-frequency modes appearing more quickly, and in total representing 45% of the total variance present in real stories (see Figures S22 and S25 in Additional file 1). Hierarchical clustering generates less distinct clusters with considerably lower linkage cost (final linkage cost 1,400 vs 7,000) for the emotional arcs from nonsense books, and the winning node vectors on a self-organizing map lack coherent structure (see Figures S26 and S29 in Appendix H in Additional file 1).
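The two null models can be sketched directly; function names here are illustrative, and a real run would apply them to each book’s full token stream:

```python
# Sketch of the two null text models: 'word salad' shuffles the book's
# words, and 'nonsense' samples from a 2-gram (bigram) Markov model
# trained on the book itself.
import random
from collections import defaultdict

def word_salad(tokens, seed=0):
    shuffled = tokens[:]
    random.Random(seed).shuffle(shuffled)
    return shuffled

def nonsense(tokens, length, seed=0):
    """Sample `length` tokens from a bigram Markov model of `tokens`."""
    rng = random.Random(seed)
    successors = defaultdict(list)
    for w, nxt in zip(tokens, tokens[1:]):
        successors[w].append(nxt)
    word = rng.choice(tokens)
    out = [word]
    while len(out) < length:
        # Restart from a random word if the chain hits a dead end.
        word = rng.choice(successors[word]) if successors[word] else rng.choice(tokens)
        out.append(word)
    return out

tokens = "the cat sat on the mat and the dog sat on the rug".split()
salad = word_salad(tokens)
markov = nonsense(tokens, length=10)
```

Both null texts preserve the book’s vocabulary (and, for ‘nonsense’, its local word statistics) while destroying long-range narrative structure, which is exactly what the comparison requires.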

### 3.5 The success of stories

## 4 Conclusion

Using three distinct methods, we have demonstrated that there is strong support for six core emotional arcs. Our methodology brings to bear a cross section of data science tools with a knowledge of the potential issues that each presents. We have also shown that consideration of the emotional arc for a given story is important for the success of that story. Of course, downloads are only a rough proxy for success, and this work may provide an outline for more detailed analysis of the factors that impact meaningful measures of success, *e.g.*, sales or cultural influence.

Our approach could be applied in the opposite direction: namely by beginning with the emotional arc and aiding in the generation of compelling stories [30]. Understanding the emotional arcs of stories may be useful to aid in constructing arguments [31] and teaching common sense to artificial intelligence systems [32].

Extensions of our analysis that use a more curated selection of full-text fiction can answer more detailed questions about which stories are the most popular throughout time, and across regions [10]. Automatic extraction of character networks would allow a more detailed analysis of plot structure for the Project Gutenberg corpus used here [11, 33, 34]. Bridging the gap between the full text stories [35] and systems that analyze plot sequences will allow such systems to undertake studies of this scale [36]. Place could also be used to consider separate character networks through time, and to help build an analog to Randall Munroe’s Movie narrative charts [37].

We are producing data at an ever increasing rate, including rich sources of stories written to entertain and share knowledge, from books to television series to news. Of profound scientific interest will be the degree to which we can eventually understand the full landscape of human stories, and data driven approaches will play a crucial role.

## Footnotes

- 1.
The specific classes have labels PN, PR, PS, and PZ.

## Notes

### Acknowledgements

PSD and CMD acknowledge support from NSF Big Data Grant #1447634.

## Supplementary material

### References

- 1.Pratchett T, Stewart I, Cohen J (2003) The science of Discworld II: the globe. Ebury Press, London
- 2.Campbell J (2008) The hero with a thousand faces, 3rd edn. New World Library, Novato
- 3.Gottschall J (2013) The storytelling animal: how stories make us human. Mariner Books, New York
- 4.Cave S (2013) The 4 stories we tell ourselves about death. http://www.ted.com/talks/stephen_cave_the_4_stories_we_tell_ourselves_about_death
- 5.Dodds PS (2013) Homo narrativus and the trouble with fame. Nautilus magazine. http://nautil.us/issue/5/fame/homo-narrativus-and-the-trouble-with-fame
- 6.Nickerson RS (1998) Confirmation bias: a ubiquitous phenomenon in many guises. Rev Gen Psychol 2:175-220
- 7.Gleick J (2011) The information: a history, a theory, a flood. Pantheon, New York
- 8.Propp V (1968) Morphology of the folktale (1928). University of Texas Press, Austin
- 9.MacDonald MR (1982) Storytellers sourcebook: a subject, title, and motif index to folklore collections for children. Gale Group, Farmington Hills
- 10.da Silva SG, Tehrani JJ (2016) Comparative phylogenetic analyses uncover the ancient roots of Indo-European folktales. R Soc Open Sci 3(1):150645. doi:10.1098/rsos.150645
- 11.Min S, Park J (2016) Narrative as a complex network: a study of Victor Hugo’s Les Misérables. In: Proceedings of HCI Korea
- 12.Jockers M (2014) A novel method for detecting plot. http://www.matthewjockers.net/2014/06/05/a-novel-method-for-detecting-plot/
- 13.Dundes A (1997) The motif-index and the tale type index: a critique. J Folklore Res 34:195-202
- 14.Dolby SK (2008) Literary folkloristics and the personal narrative. Trickster Press, Bloomington
- 15.Uther H-J (2011) The types of international folktales. A classification and bibliography. Based on the system of Antti Aarne and Stith Thompson. Part I. Animal tales, tales of magic, religious tales, and realistic tales, with an introduction. FF communications, vol 284. Finnish Academy of Science and Letters, Helsinki
- 16.Kirschenbaum MG (2007) The remaking of reading: data mining and the digital humanities. In: The national science foundation symposium on next generation of data mining and cyber-enabled discovery for innovation, Maryland
- 17.Moretti F (2013) Distant reading. Verso, New York
- 18.Vonnegut K (1981) Palm Sunday. RosettaBooks LLC, New York
- 19.Vonnegut K (1995) Shapes of stories. https://www.youtube.com/watch?v=oP3c1h8v2ZQ
- 20.Reagan A, Tivnan B, Williams JR, Danforth CM, Dodds PS (2015) Benchmarking sentiment analysis methods for large-scale texts: a case for using continuum-scored words and word shift graphs. arXiv:1512.00531
- 21.Ribeiro FN, Araújo M, Gonçalves P, Gonçalves MA, Benevenuto F (2016) SentiBench - a benchmark comparison of state-of-the-practice sentiment analysis methods. EPJ Data Sci 5(1):23. doi:10.1140/epjds/s13688-016-0085-1
- 22.Dodds PS, Harris KD, Kloumann IM, Bliss CA, Danforth CM (2011) Temporal patterns of happiness and information in a global social network: hedonometrics and Twitter. PLoS ONE 6(12):e26752. doi:10.1371/journal.pone.0026752
- 23.Tenenbaum DJ, Barrett K, Medaris SV, Devitt T (2015) In 10 languages, happy words beat sad ones. http://whyfiles.org/2015/in-10-languages-happy-words-beat-sad-ones/
- 24.Booker C (2006) The seven basic plots: why we tell stories. Bloomsbury Academic, New York
- 25.Various (2010) Project Gutenberg. http://gutenberg.org
- 26.Ward JH Jr (1963) Hierarchical grouping to optimize an objective function. J Am Stat Assoc 58(301):236-244
- 27.Kohonen T (1990) The self-organizing map. Proc IEEE 78(9):1464-1480
- 28.Dodds PS, Clark EM, Desu S, Frank MR, Reagan AJ, Williams JR, Mitchell L, Harris KD, Kloumann IM, Bagrow JP, Megerdoomian K, McMahon MT, Tivnan BF, Danforth CM (2015) Human language reveals a universal positivity bias. Proc Natl Acad Sci USA 112(8):2389-2394
- 29.Rousseeuw PJ (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 20:53-65
- 30.Li B, Lee-Urban S, Johnston G, Riedl M (2013) Story generation with crowdsourced plot graphs. In: Proceedings of the twenty-seventh AAAI conference on artificial intelligence
- 31.Bex FJ, Bench-Capon TJ (2010) Persuasive stories for multi-agent argumentation. In: AAAI fall symposium: computational models of narrative, vol 10, p 4
- 32.Riedl MO, Harrison B (2015) Using stories to teach human values to artificial agents
- 33.Bost X, Labatut V, Linarès G (2016) Narrative smoothing: dynamic conversational network for the analysis of TV series plots. arXiv:1602.07811
- 34.Prado SD, Dahmen SR, Bazzan ALC, Carron PM, Kenna R (2016) Temporal network analysis of literary texts. arXiv:1602.07275
- 35.Nenkova A, McKeown K (2012) A survey of text summarization techniques. In: Mining text data. Springer, Berlin, pp 43-76
- 36.Winston PH (2011) The strong story hypothesis and the directed perception hypothesis
- 37.Munroe R (2009) Movie narrative charts. http://xkcd.com/657/

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.