International Conference on Interactive Digital Storytelling

Interactive Storytelling, pp. 179–185

Narrative Review Process: Getting Useful Feedback on Your Story

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9445)

Abstract

Getting useful feedback on the narrative of a game can be notoriously difficult: by the time a playtester can experience the story in your game, it’s usually already too late to make any significant changes. With this in mind, we developed a method that allows Ubisoft to apply a simple research methodology to a game’s story early enough for it to be valuable to the writers working on it. This paper defines common difficulties of properly testing narrative in games, then describes the methodology we developed and how it was successfully used on Ubisoft productions to generate actionable feedback.

Keywords

User research · Narrative review · User testing · Video games

1 Problem Definition

Narrative plays an extraordinarily important role in contemporary video games, as does user research. However, disproportionately little effective narrative research is conducted in video games. One of the key problems is that narrative tends to be fully integrated quite late in the production of a game, because incorporating it requires considerable work: finished scripts, voice-over recordings, motion capture, etc. Once those milestones are reached, the game is nearing completion and user feedback is informative at best, but certainly not actionable.

This creates several problems for game directors and narrative designers, who must attempt to seamlessly merge a written story and game design into a cohesive whole. Research results from players experiencing the nearly finished game arrive so late in development that the team is unlikely to have time to act on the feedback and improve the quality of the story in any meaningful way. Changes at this point are impractical and expensive to implement, if not impossible, as games must adhere to strict release schedules, often planned years in advance.

Games user research has become increasingly common in game development as designers have learned the value of data-informed decision making and iteration. It is common for level and game designers to receive thirty rounds of scientifically collected feedback on their designs. Writers, on the other hand, tend to have a single table read before they send the script to production. EEDAR [1] recently presented data stating that nearly half of all text written in game reviews relates to story. Given the relative importance of narrative to the overall perception of quality of a game, it is only logical that research should support it with as much attention as game mechanics, level design, or overall aesthetics.

The average user is often ill-equipped to deliver critical, insightful feedback on storytelling and narrative. Their comments relating to story tend to be vague and unfocused. Users are good at providing a broad appreciation, but have difficulty articulating complex narrative feedback, as they are game players, not literary critics. Unfortunately, this level of information is often insufficient for writers to make the specific changes required to improve their stories.

Another issue that many writers face on a daily basis is the stream of frequent changes and unsolicited feedback from team members. This information, while potentially valuable, is unorganized, sporadic, and generally difficult to parse. Writers must often rely on gut feeling and misguided orders from superiors to make changes to the narrative that are difficult to justify. They are unable to benefit from the data-informed decision making that their colleagues in other roles have the opportunity to use.

Additionally, narrative tends to be considered low priority by those outside the narrative team. Designers do not always consider the context of the story while working on their individual slice of the game. This can lead to a disconnect between gameplay actions and narrative. This problem is referred to as ludonarrative dissonance and has been described by many authors from both a narrative and a game design perspective. Numerous scholars claim that storytelling should be linear and created by writers, whereas interactive experiences such as video games depend entirely on the user’s choices and motives [2]. Jenkins tries to unite the two camps by arguing that even though a game can exist without a narrative, a narrative can define game mechanics and rules [3].

These problems inspired the research teams of Ubisoft Montreal and Ubisoft Massive to create a research methodology that would allow them to gather actionable and useful feedback on game narrative earlier in the development process.

2 Developing the Method

The methodology is the result of several iterations, after which various takeaways were integrated to improve the overall process. It was originally created as a practical response to the problems identified above: a game in conception required structured, clear feedback on its narrative before development could continue, so representatives from the writing community and the research group worked out a solution drawing on the best practices of each field of expertise. The process described below is the result of that collaboration, further refined through iteration on other projects. As a practical tool for game developers, keeping the process lightweight, actionable, cost-effective, and relatively quick were key considerations.

The basic structure of the narrative review consists of these steps:
  • The review group reads and provides notes on a treatment, outline or script

  • The feedback is synthesized into a report covering all of the notes provided by the reviewers

  • A roundtable exercise is held that is part discussion and part brainstorm

2.1 Acquiring the Treatments and Recruiting the Participants

A treatment should be around a dozen pages in length. This should provide a thorough overview of the main storyline, the challenges of the protagonist, and the high-level motivations of the player, without going into unnecessary detail or dialogue. This format tends to convey the characterization and plot without burdening the review group with a full 300-page script, which would be unreasonable. This type of document also tends to be created very early in a project, as a guideline for creating the final script; recommendations for changes at this point are therefore more likely to be integrated, as they have a lower overall impact than later in production. Another useful option is to include additional assets with the script to help the review group contextualize the text, such as concept or character art. This gives a fuller experience and more closely resembles the audiovisual experience of playing a game.

Between six and twelve participants appears to be the optimal number for this exercise. Two types of reviewers are recruited: writers and game designers. The writers recruited should not be working on the game being studied. The second group comprises stakeholders of the narrative from the design team who are not directly implicated in the writing of the script: creative directors, game designers, level designers, and other leads. This provides two very important and different points of view. Writers can give highly specific, craft-oriented feedback focusing on characterization, pacing, writing technique, etc., while game designers can give very contextualized feedback from a production perspective, focusing on the story’s impact on the game and its place within the broader scope of the entire production.

2.2 Reader-Friendly Process: Which Questions Should We Ask?

Participants are given 10 days to read the script and take notes. Every section of the script is linked to a short questionnaire with a box for taking notes, a short battery of rating questions followed by a small open-ended justification section, and finally an area giving the reader the option to ask questions directly to the writer. At the end of the treatment there is a longer questionnaire covering more general topics. At Ubisoft, we use a digital survey solution (Fig. 1) that allows us to collect the results, quickly compile the qualitative data, and tabulate the quantitative data.
Fig. 1. Digital survey tool

For the script notes portion of the questionnaire we attempt to mimic and accommodate a natural script note methodology used by many writers. They can take their notes in the margin, or use their text editor markup tool, and then transfer the notes to the questionnaire using a simple notation that specifies page, paragraph and line.
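As a sketch of how such location-tagged notes could be processed, the snippet below splits a reviewer note into its page/paragraph/line reference and free-text comment. The paper does not prescribe a tool or an exact notation, so the Python implementation, the "p12.3.4" format, and the example note are purely illustrative:

```python
import re

# Hypothetical notation: "p12.3.4" = page 12, paragraph 3, line 4.
# The exact format used at Ubisoft is not specified in the paper.
NOTE_REF = re.compile(r"p(?P<page>\d+)\.(?P<para>\d+)\.(?P<line>\d+)")

def parse_note(note: str) -> dict:
    """Split a reviewer note into its location reference and free-text comment."""
    match = NOTE_REF.match(note)
    if match is None:
        raise ValueError(f"Note lacks a location reference: {note!r}")
    location = {key: int(value) for key, value in match.groupdict().items()}
    comment = note[match.end():].lstrip(" :-")
    return {"location": location, "comment": comment}
```

Structuring notes this way lets the researcher sort and group all reviewer comments by script location before synthesis.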

The next section is the battery. Questions use a 5-point Likert scale ranging from “agree” to “disagree”. The questions were formulated in collaboration with a writing team to capture the most important elements of a story:
  • Appreciation: “I enjoyed this section of the script.”

  • Comprehension: “I understood what was happening in this section of the script.”

  • Interest: “I want to know what happens next in the story.”

  • Character motivation: “I understand the protagonist’s motivations in this part of the story.”

  • Character progression: “I understand what the protagonist must do next.”

These questions provide the fundamental answers required to understand how the readers are engaging with a narrative. Asking these identical questions on each section of the script allows us to easily track the key performance indicators of the story and identify weak areas. The questions themselves were developed with the writers to provide information on the elements that they considered to be of key importance for enjoyment of a narrative work. Players must enjoy the story, understand what is happening, remain interested, and understand and engage with the motivations and future actions of the protagonist.
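Tracking these indicators section by section can be sketched as follows. This is an illustrative example rather than Ubisoft's actual tooling: the section names, question keys, scores, and the 3.0 flagging threshold are all invented assumptions.

```python
from statistics import mean

# Each reviewer rates each script section on the five battery questions
# (1 = disagree ... 5 = agree). All data below is invented for illustration.
responses = {
    "Act 1 - Opening": {"appreciation": [5, 4, 4], "comprehension": [5, 5, 4],
                        "interest": [4, 4, 5], "motivation": [4, 5, 4],
                        "progression": [5, 4, 4]},
    "Act 2 - Betrayal": {"appreciation": [2, 3, 2], "comprehension": [3, 2, 2],
                         "interest": [4, 3, 3], "motivation": [2, 2, 3],
                         "progression": [3, 2, 2]},
}

def weak_sections(responses, threshold=3.0):
    """Return (section, question, mean score) triples scoring below the threshold."""
    flagged = []
    for section, battery in responses.items():
        for question, scores in battery.items():
            avg = mean(scores)
            if avg < threshold:
                flagged.append((section, question, round(avg, 2)))
    return flagged

for section, question, avg in weak_sections(responses):
    print(f"{section}: low '{question}' score ({avg})")
```

A section flagged on several questions at once (as "Act 2" is here) is a natural candidate for the round-table discussion guide.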

The portion of the questionnaire that allows the reader to ask the writer questions is valuable: it allows the group to question the intention of the writer and to ask broader, thought-provoking questions, as opposed to the more formal and specific notes.

The final questionnaire is developed closely with the writing team in order to ensure that all important elements of the narrative structure are covered. The test objectives must reflect the current uncertainties of the writer, the areas in which they would like to receive constructive feedback, and the portions which they have already identified as problematic. This questionnaire covers such topics as:
  • Overall likes and dislikes of the script

  • Most memorable moments

  • Elements that require more attention from the writing team

  • Favorite and least favorite characters. Character-related questions can also be included in each section, depending on how important the feedback is to the writing team.

  • Resolution (ending) satisfaction

  • Other specific questions based on the writer’s key questions and concerns.

2.3 Analysis and Synthesis

Once all of the results have been collected, the researcher begins to synthesize and analyze the data into a preliminary report. The researcher may apply their preferred text coding method, such as mind mapping, grounded theory [4], or any other applicable process. Several ways of coding in content analysis have been widely described in the literature. For example, Matthes and Kohring [5], who used this method for psychological narrative analysis, describe a well-structured method that can be applied to the entire text as well as to a smaller unit such as a paragraph. They define the clusters beforehand based on their previous knowledge of the topic, whereas we define the topics based on the insights we get from the readers’ comments. However, they also divide the data into negative and positive statements. Similar methods are used for film script analysis [6], and the authors mention their applicability to interactive storytelling systems such as video games. During the analysis it is important to look both for areas of consensus and areas of diverging opinions, as both may provide valuable insight into the aggregate responses of the readers.
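A minimal sketch of tallying coded comments by topic and valence might look like this. The topics, valences, `recurring_themes` helper, and two-mention threshold are invented for illustration; a real analysis would follow one of the coding methods cited above:

```python
from collections import Counter

# Each reviewer comment has been coded by the researcher as a (topic, valence)
# pair during analysis. All codes below are invented for the example.
coded_comments = [
    ("pacing", "negative"), ("pacing", "negative"), ("pacing", "positive"),
    ("protagonist motivation", "negative"),
    ("dialogue", "positive"), ("dialogue", "positive"),
]

tally = Counter(coded_comments)

def recurring_themes(tally, min_mentions=2):
    """Topic/valence pairs raised by several reviewers: candidate report headlines."""
    return [(topic, valence, count) for (topic, valence), count in tally.items()
            if count >= min_mentions]
```

Pairs that recur across reviewers indicate consensus, while a topic that appears with both valences (as "pacing" does here) signals the diverging opinions worth raising at the round table.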

One problem that should be avoided is an overwhelmingly negative report. The note-taking process tends to focus on areas that require improvement, so the report can become a long list of problem areas. Encourage the reading group to point out areas they feel are particularly good during the note-taking process. It is important for the writer to understand which portions of the script are resonating, so that they are not lost in the iteration process. It is also important to remind the review group that the text is a work in progress and should not be perceived as a final document.

2.4 Round Table: Fruitful Discussion

Once the preliminary report has been delivered to the writer, the user researcher should develop a discussion guide with them. The round table should cover the most heavily weighted criticisms, the most divisive comments, and the most interesting questions for the writer. It is not necessary to invite the entire review group, as that many participants would be detrimental to the smooth operation of a discussion. Rather, choose the reviewers who had the most interesting and insightful notes. Ideally, the group should consist of approximately six of the readers, the writer as an observer, the researcher to act as group moderator, and a note taker.

The format of the round table can move freely between a discussion group, where participants share and expand on their opinions of the script, and a brainstorming session, where the group attempts to solve difficult narrative conundrums together. The discussion guide must be completely anonymized, speaking only generally about important topics, so that none of the reviewers feel singled out or put on the spot.

It is important to schedule the roundtable early in the process in order to ensure that the necessary participants will be able to attend. Waiting too long between the reading session and the roundtable may cause participants to lose interest and forget important material. Additionally, if the process is too time consuming the narrative will possibly have shifted, making the study results obsolete and less valuable and meaningful to the writer.

The results of this discussion group are added to the report.

3 Conclusion

This methodology allows a user research team to deliver actionable feedback on a narrative to a writer before it is fully integrated in the game. This allows writers to iterate on the issues present in their story in a data-informed manner, with constructive, useful feedback. The review document combining the questionnaire results and the round table gives a solid foundation for the writers to make better decisions and to quell some of the uncertainty in their craft. The quality of the feedback received from this peer group exceeds what could be obtained from a group of average gamers, as it is contextualized, direct, clear, and solution oriented.

It is important to note that the review does not force the writer to incorporate any of the changes; no one is advocating design by committee. It simply provides writers with the information required to willfully choose their path, and then documents the decisions made based on that information, allowing them to get directorial sign-off and keeping everyone involved accountable.

The process gives various stakeholders of the project a voice in the creation of the narrative, leading to greater investment in the narrative and greater consideration from the team when creating their own portion of the game.

Lastly, the process is easily customizable depending on the focus, the available assets, the available resources, the stage of development, and the specifics of each individual game. The process described above should be considered a widely adjustable framework for evaluating story that can be repurposed for a variety of circumstances. It is simple to set up, relatively cost-effective to run, and provides valuable, much-needed feedback to the writing staff of a game development project.

4 Further Research

The current narrative review process relies very heavily on a reading experience, which is a passive consumption of a potentially non-interactive, linear story. It may be difficult for some readers to accurately relate the reading experience to a gaming one, which normally incorporates various other media such as graphics, music, and interactivity. Research can continue to develop methods that include more media to create a closer facsimile of the experience of playing a game. This would allow us to get feedback that more closely resembles the real environment of playing a game. It would also include the sense of control over the character, which is an essential part of the interactivity of a game but is hard to represent in written text [5]. The greater the sense of control in an interactive story, the more immersive the experience. This method would require further study to ascertain its validity and relative value.

Another area for further development is a methodology to test non-linear stories. Games increasingly incorporate player choice, but the current method does not account for texts with branching paths and multiple possible outcomes. This could be achieved by using a more modular script delivery system and allowing participants to read in a manner similar to a “choose-your-own-adventure” style book, progressing the story in the manner of their choosing.
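One way such a modular delivery could be structured is sketched below: each script module ends in reader choices that point to the next module. The module names, choice labels, and `read_path` helper are hypothetical, not part of the method as practiced:

```python
# Each module holds a script excerpt and the choices that branch from it.
# Terminal modules have no choices. All content is invented for the example.
modules = {
    "intro": {"text": "intro.pdf", "choices": {"spare the informant": "mercy",
                                               "turn him in": "justice"}},
    "mercy": {"text": "mercy.pdf", "choices": {}},
    "justice": {"text": "justice.pdf", "choices": {}},
}

def read_path(modules, start, decide):
    """Walk the branching script, letting `decide` pick one choice per module."""
    path, current = [], start
    while current is not None:
        path.append(current)
        choices = modules[current]["choices"]
        current = choices[decide(choices)] if choices else None
    return path
```

Logging the path each reviewer takes would also tell the researcher which branches remain under-read and need additional recruitment.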

The last part of the methodology development would be to adjust the process for use with regular players who represent the target audience of the game. This would allow us to verify the adjustments made based on the expert feedback and add an additional dimension to the feedback. Regular playtesters could be used to conduct research with a much narrower scope and to iterate on the more contentious areas of the previous expert review. This would allow for better continued support of the narrative portion of our games.

References

  1. Bernbeck, S.: What drives a review score? GamesIndustry.biz. http://www.gamesindustry.biz/articles/2015-02-09-what-drives-a-review-score
  2. Adams, E.: Three problems for interactive storytellers. Gamasutra. http://www.gamasutra.com/view/feature/131821/the_designers_notebook_three_.php
  3. Jenkins, H.: Game design as narrative architecture. In: Wardrip-Fruin, N., Harrigan, P. (eds.) First Person: New Media as Story, Performance, and Game, pp. 118–130. MIT Press, Cambridge (2004)
  4. Strauss, A., Corbin, J.: Grounded theory methodology. In: Denzin, N.K., Lincoln, Y.S. (eds.) Handbook of Qualitative Research, pp. 217–285. Sage Publications, Thousand Oaks (1994)
  5. Matthes, J., Kohring, M.: The content analysis of media frames: toward improving reliability and validity. J. Commun. 58(2), 258–279 (2008)
  6. Murtagh, F., Ganz, A., McKie, S.: The structure of narrative: the case of film scripts. Pattern Recogn. 42(2), 302–312 (2009)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Ubisoft Montreal, Montreal, Canada
  2. Ubisoft Massive, Malmö, Sweden