Between the observation of fluid situations and their analysis there are several steps in which data is produced, sorted, assessed, discussed, and interpreted. An important question for design research is how this data is evaluated, though there is no consensus about what constitutes analysis or whether doing it makes sense at all. Occasionally, design research projects conduct qualitative research and then carry out quantitative analysis (Hahn and Zimmermann 2010, p. 271). The resulting tables and pie charts may create the appearance of scientific objectivity, but it is precisely such approaches that lack an explorative engagement with the data.

Gaver et al. categorically reject analysis of Cultural Probes (1999, p. 27). This rejection of analysis is grounded in a positivistic conception of research according to which handling data in a rational way destroys its inspiring qualities. The meaning of analysis, however, is “to unbind”; it attempts to pry apart the totality of the data to find within it new combinations of meaning that are not visible at first glance. This is an interpretive and creative endeavor.

When performing analysis in the context of design research, it is useful, with visual data in particular, to lay the data out physically in space. Photos, sketches, and illustrations—and possibly sequences of text as well—are printed out and arranged on walls and tables. Having a wide range of data facilitates cross-comparison. That is exactly what analysis is about—the search for semantic relationships, that is, similarities and differences within and outside the data. The goal consists not in prematurely classifying the data according to traditional common-sense understandings, but rather in searching for new configurations of meaning within it. Analysis is an inspiring, explorative, and playful engagement with the data that is enmeshed in the iterative loops of the design process.

6.1 Transcriptions

The basis for analysis is usually text. Situations are fluid and singular, and impossible to analyze directly because by the time of the analysis they lie in the past. Every description of them is thus a linguistic reconstruction that consequently produces texts. Visual secondary data—that is, photos and film—lend themselves to analysis, but as soon as an interpretation is articulated, they too produce language. Ethnography can create a variety of textual forms:

  • Transcriptions of interviews, conversations, and photo-elicitations

  • Field notes, observation protocols, memos, and research logs

  • Transcriptions of group discussions and focus groups

  • Online data sets: communication strings from internet forums and chats, texts from websites, images, films

  • Documents such as newspaper articles, fliers, print advertisements, annual reports, magazines

  • Essays, letters, and texts from Cultural Probes and brainwriting

  • Descriptions of images, film, and artifacts

Social scientific analysis takes as its basis a variety of texts, which can even consist of key words or transcribed speech sequences uttered during a photo-elicitation session or in a focus group in response to a certain artifact. Depending on the level of detail, transcription can be extremely time-intensive. Transcription software such as f4 or the Windows app Record and Transcribe reduces this effort, but producing one’s own transcription can have the advantage of a more intensive initial engagement with the data.

The number of sequences one transcribes—whether an entire interview or only the central utterances—depends on the research questions and the context. Equally context-dependent is the manner of transcription. As a rule, a transcription in standard orthography—that is, ordinary language—is sufficient. Literary transcription, eye dialect, and phonetic transcription, on the other hand, set down not only the content but also prosodic features, colorations of dialect, incomprehensible expressions, and paralinguistic elements such as gestures, facial expressions, laughter, and physical movements (Kowal and O’Connell 2008, p. 441). If, for instance, the study is investigating face-to-face communication in a doctor-patient setting, then it may be important to note temporal overlaps in the progression of speech. It makes little sense, however, to expend enormous amounts of time writing down every clearing of the throat and every pause for thought if there is no intention to interpret these things.
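The trade-off between standard-orthography and detailed transcription can be made concrete with a small data structure. This is only an illustration: the annotation fields and the notation inside the strings are invented for the example, not a standard transcription system.

```python
# Two levels of transcription detail for the same (invented) utterance.
# The field names and notation are illustrative, not a standard system.

# Standard orthography: content only.
simple_turn = {
    "speaker": "patient",
    "text": "Well, I have not been sleeping well.",
}

# Detailed transcription: prosody, pauses, and paralinguistic elements
# are recorded alongside the content.
detailed_turn = {
    "speaker": "patient",
    "text": "we::ll (2.0) I have NOT been sleeping well",  # lengthening, pause, stress
    "paralinguistic": ["sighs", "looks away"],
    "overlaps_with": "doctor",  # temporal overlap in the speech progression
}

print(detailed_turn["paralinguistic"])
```

Whether the extra fields are worth the transcription time depends, as noted above, on whether these features will actually be interpreted.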

Design research makes use of Grounded Theory (Brandes et al. 2009, p. 175 f.; Findeli 2004, p. 45) and ethnographic semantics (Cranz 2016, p. 6 ff.) for data analysis. Both methods correspond, more or less, to the established practice in design research of approaching data in an exploratory and open way and looking for overarching themes in it. Both methods reduce events to language (Tusting 2019), which is sometimes criticized in the context of design ethnography (Crabtree et al. 2009, p. 882). As already noted, however, analysis is fundamentally based on language. This is evident in the interpretation of images. We can look at a picture and let it affect us, but as soon as we begin to interpret it and speak with someone about it, we enter the world of language, for “descriptions are always linguistic” (Eberle 2017, p. 37). And it is only when these thoughts are articulated and written down that they transcend time and space. Only then do they become intersubjectively communicable. Only then is research possible at all.

6.2 Grounded Theory

Grounded Theory was developed in the 1960s by Anselm Strauss and Barney Glaser (Bryant and Charmaz 2007; Charmaz 2014; Glaser and Strauss 1995; Holton 2018). With its interactionist, micro-sociological orientation, it presented a theoretical alternative to the structural functionalism dominant at the time. Grounded Theory is not a mechanistic procedure that leads to unified, much less to objective, findings independently of the researcher; rather, the researcher actively produces data—at least in interactionist Grounded Theory. Precisely because it leaves the researcher a great deal of freedom, Grounded Theory is a very demanding method. It does not “lead” one in a linear way through the process, but rather repeatedly makes one aware of how contingent the data is. In this regard, the method can make one uncertain. The position of Grounded Theory “is not logical; it is phenomenological” (Glaser and Strauss 1995, p. 6). Charmaz and Mitchell characterize the procedure of Grounded Theory as follows (2009, p. 160):

  1. simultaneous data-collection and analysis;

  2. pursuit of emergent themes through early data analysis;

  3. discovery of basic social processes within the data;

  4. inductive construction of abstract categories that explain and synthesize these processes;

  5. integration of categories into a theoretical framework that specifies causes, conditions, and consequences of these processes

The key features of Grounded Theory consist in iteration between data collection and analysis, discovery of clues in the data, ordering of the clues into categories, and the contextualization that results from this. An important element of Grounded Theory is the writing of memos (Charmaz and Mitchell 2009, p. 167 f.). The point here is to formulate questions and write down thoughts regarding the data, which should create a reflexive distance. These thoughts may be associative, interpretive, or speculative (Lempert 2007, p. 247). Memos are like Post-It notes. They accompany the entirety of the research process. They articulate spontaneous thoughts and associations. They are subjective without any claim to objectivity. “Memos are preliminary, partial and correctable” (Charmaz and Mitchell 2009, p. 167).

Analysis using Grounded Theory begins with open coding (Holton 2007). Open coding involves interrogating the data with questions such as “‘What is the data a study of?’, ‘What categories does this indicate?’, ‘What is actually happening in the data?’, ‘What is the main concern being faced by the participants?’ and ‘What accounts for the continual resolving of this concern?’” (Glaser 1998, p. 140). Concretely, the process consists in marking the significant terms in the texts, the selection criteria being semantic rather than syntactic: the coding is applied to meanings, not to formal units such as words or sentences. Although it may be significant when a term is used frequently, the analysis is not about quantification. The marked codes are used as a basis for building categories. Categories are the first superordinate themes that subsume similar codes. After this, the coding proceeds in further iterative steps with increasingly greater selectivity. This multi-step coding is defined as “theoretical sampling” (Glaser and Strauss 1995, p. 45 ff.). In that regard the process of Grounded Theory resembles the funneling principle: it begins completely open and closes in increasingly on the concepts found, until theories and hypotheses grounded in the data are developed.
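The funneling from open codes to first categories can be sketched as a toy data transformation. All segments, code labels, and category names below are invented for illustration; real coding is an interpretive act, not a lookup:

```python
# Illustrative sketch of open coding followed by category building,
# loosely following the Grounded Theory steps described above.
# All segments, codes, and categories are invented examples.

# Open coding: semantically significant segments are tagged with codes.
coded_segments = [
    ("I always charge my phone right by the bed", "device proximity"),
    ("I feel uneasy when the battery is low", "battery anxiety"),
    ("The charger has its fixed place on the nightstand", "device proximity"),
    ("I check the percentage before leaving the house", "battery anxiety"),
]

# Category building: similar codes are subsumed under a first
# superordinate theme; later iterations would refine this further.
code_to_category = {
    "device proximity": "spatial routines around the device",
    "battery anxiety": "emotional dependence on charge state",
}

def build_categories(segments, mapping):
    """Group coded segments under their superordinate categories."""
    categories = {}
    for text, code in segments:
        categories.setdefault(mapping[code], []).append(text)
    return categories

for name, texts in build_categories(coded_segments, code_to_category).items():
    print(f"{name}: {len(texts)} segments")
```

The interpretive work—deciding which segments are significant and which codes belong together—remains with the researcher; the structure only records the result of each iteration.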

6.3 Ethnosemantic Analysis

A similar alternative process is ethnographic semantics or ethnosemantic analysis (Spradley 1979, 1980; Maeder 1995). Here too, one begins with text, although the original text sequences are described as “native terms” (Spradley 1979, p. 73), in stark contrast to the “observer terms” of the ethnographer. In principle, Spradley posits: “Analysis is a search for patterns” (1980, p. 85). Ethnosemantic analysis is based on the assumption that cultural meanings are produced symbolically through language, and the task consists in decoding them (Spradley 1979, p. 99).

The analysis is conducted on the basis of native terms (Spradley 1979, p. 73) that are specific to a particular social lifeworld. Similar terms are used to develop domains (Spradley 1979, p. 107 ff., 1980, p. 85 ff.). In a further step, taxonomic analysis (Spradley 1979, p. 132 ff., 1980, p. 112 ff.) is performed. Spradley explains this using the example of magazines: the domain “magazine” encompasses categories such as “comics,” “women’s magazines,” or “news magazines,” which are in turn broken down into “Time,” “Newsweek,” and “U.S. News & World Report” (1980, p. 112 ff.). Semantic connections are also investigated. Spradley mentions nine types (1979, p. 111):

  1. Strict Inclusion; X is a kind of Y

  2. Spatial; X is a place in Y, X is a part of Y

  3. Cause-effect; X is a result of Y, X is a cause for Y

  4. Rationale; X is a reason for Y

  5. Location for action; X is a place for doing Y

  6. Function; X is used for Y

  7. Means-end; X is a way to do Y

  8. Sequence; X is a step (stage) in Y

  9. Attribution; X is an attribute (characteristic) of Y
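Such semantic relations can be pictured as simple (X, relation, Y) triples, which makes the resulting taxonomy queryable. The triples below reuse Spradley's magazine example; the helper function is an invented illustration, not part of his method:

```python
# Semantic relations as (X, relation, Y) triples, using Spradley's
# magazine taxonomy as example data. Only the "strict inclusion"
# relation is shown; the other eight types would be further triples.
relations = [
    ("comics", "is a kind of", "magazine"),
    ("women's magazines", "is a kind of", "magazine"),
    ("news magazines", "is a kind of", "magazine"),
    ("Time", "is a kind of", "news magazines"),
    ("Newsweek", "is a kind of", "news magazines"),
    ("U.S. News & World Report", "is a kind of", "news magazines"),
]

def members_of(cover_term, triples, relation="is a kind of"):
    """Return all X standing in the given relation to a cover term Y."""
    return [x for x, rel, y in triples if rel == relation and y == cover_term]

print(members_of("news magazines", relations))
```

Walking the triples recursively from the top-level domain reproduces the taxonomic tree (“magazine” → “news magazines” → “Time”, …) described above.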

In a further analytical step—known as Componential Analysis—individual domains are analyzed in greater depth (Spradley 1979, p. 173 ff., 1980, p. 131 ff.). Spradley explains this using the example of his daily mail, which consists of personal letters, bills, books, advertisements, etc. These categories all have their own particularities and—even before the mail is opened—they lead to certain practices and emotions. A circular will not get the same kind of attention as a personal letter with a hand-written address. A bill arouses displeasure. Which specific domains, and how many, one analyzes in depth is—as Spradley points out—a matter of discretion and depends on whether a study goes into depth or breadth (1980, p. 134). Finally, cultural themes are derived from the semantic relations, taxonomies, and componential analyses. These themes are “any principle recurrent in a number of domains, tacit or explicit, and serving as relationship among subsystems of cultural meaning” (Spradley 1979, p. 185 ff., 1980, p. 141).
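Componential analysis amounts to a feature matrix: the categories of a domain are contrasted along shared attributes. The attributes and values below are an invented extension of Spradley's mail example:

```python
# A componential matrix for the domain "daily mail": each category is
# contrasted along the same set of attributes. All values are
# illustrative, extrapolated from the example in the text.
mail_components = {
    "personal letter": {"sender known": True,  "typical emotion": "pleasure",     "opened first": True},
    "bill":            {"sender known": True,  "typical emotion": "displeasure",  "opened first": False},
    "circular":        {"sender known": False, "typical emotion": "indifference", "opened first": False},
}

def contrast(categories, attribute):
    """Show how the categories of a domain differ on one attribute."""
    return {name: features[attribute] for name, features in categories.items()}

print(contrast(mail_components, "typical emotion"))
```

The matrix makes the contrasts explicit: a hand-addressed personal letter and a circular differ on every attribute, which is exactly what leads to the different practices and emotions described above.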

6.4 Structured and Narrative Interviews

Expert and structured interviews are usually analyzed through content analysis. In this process, the narrative structure is broken up and the sequences are organized in accordance with the themes of the questions. Liebold and Trinczek describe this as a “top-down” logic, since the categories are already known in advance rather than generated from the data itself, as they are in Grounded Theory (2009, p. 73). This produces individual texts pertaining to different themes, the basis for which could be one or more structured or expert interviews. If there are several interviews, then commonly held and divergent positions will be compared within the individual thematic blocks.
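The “top-down” logic of content analysis can be sketched as sorting interview passages into categories that were fixed before the analysis began. The themes, experts, and passages below are invented examples:

```python
# Top-down content analysis: passages are sorted into categories known
# in advance (the themes of the interview guide), rather than generated
# from the data. All themes and passages are invented examples.
themes = ["workplace", "tools", "collaboration"]

interview_passages = [
    ("tools", "Expert A", "We still rely on hand sketches."),
    ("workplace", "Expert A", "The studio is open-plan."),
    ("tools", "Expert B", "Everything is modeled in CAD first."),
]

def by_theme(passages, theme):
    """Collect all passages on one predefined theme across interviews,
    so commonly held and divergent positions can be compared."""
    return [(who, text) for t, who, text in passages if t == theme]

for theme in themes:
    print(theme, by_theme(interview_passages, theme))
```

Note how the narrative structure of each interview is broken up: what remains are thematic blocks in which the positions of Expert A and Expert B can be compared directly.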

The situation is different when theories have to be developed out of expert interviews. In that case the interview will be used to try to “tap into the interpretative knowledge of the expert—that is, to identify the principles, values, and rules that significantly shape the expert’s interpretation” (Liebold and Trinczek 2009, p. 76). The coding methods of Grounded Theory are appropriate for this “bottom-up” strategy (Liebold and Trinczek 2009, p. 76 ff.). Which variant one chooses depends not least on whether the interview was structured or open. With structured interviews, content analysis would be advantageous, while Grounded Theory would be best for open ones. Fritz Schütze proposes a six-stage analytical process for narrative interviews (1983, p. 285), in which (1) a distinction is first drawn in the text between action and interpretation. (2) The textual sequences are then described structurally. (3) The individual structural elements are analyzed and summed up in theses. (4) The behavioral patterns of the narrator, as derived from the narrative, are compared with their own ideas about their actions. (5) Then several cases are compared and (6) a theoretical model is constructed.

6.5 Computer-Based Analysis

Qualitative data can be analyzed with the help of computers using CAQDAS (Computer-Assisted Qualitative Data Analysis Software). The available programs include AQUAD, ATLAS.ti, MAXQDA, NVIVO, and THE ETHNOGRAPH. The coding, interpretation, and analysis are, however, still performed by people, while the computer programs lend support (Kuckartz 2010, p. 13). Software serves, among other things, to increase the speed of the analysis. Furthermore, codes, categories, subcategories, memos, etc. can be connected with hyperlinks, which allows for quick creation and revision of classifications. For instance, all the in-vivo codes in a particular category or all the memos on a certain topic can be activated with one click. This enables a playful approach to the data, which can continually be rearranged in new configurations.
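The “one click” retrieval that such tools provide is essentially an inverted index from codes (or memo topics) to data segments. A toy version—not any real tool's API; codes and segments are invented:

```python
from collections import defaultdict

# A toy version of CAQDAS-style retrieval: an inverted index from codes
# to the data segments they mark. A segment may carry several codes.
index = defaultdict(list)

def code_segment(code, segment):
    """Attach a code to a data segment (the 'hyperlink')."""
    index[code].append(segment)

code_segment("in vivo: 'my spot'", "I always sit in my spot by the window.")
code_segment("territoriality", "I always sit in my spot by the window.")
code_segment("territoriality", "Nobody else uses my desk.")

# "One click": activate all segments carrying a given code.
print(index["territoriality"])
```

Because recoding only rewrites index entries rather than the data itself, classifications can be revised quickly—the basis for the playful rearrangement described above.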

6.6 Visual Data

First, it should be noted that while an image is an isolated entity (and can be analyzed as such), it is always produced under particular circumstances. The existence of an image is predicated on conditions that in most cases are invisible. It is produced by someone with a specific intention, possibly filtered and edited, and perhaps made available to a public audience (Baur and Budenz 2017, p. 73 ff.). An important question in the analysis of visual data is whether it was produced by participants in the field or by the researcher. Photos and video produced by the researcher serve primarily as documentation. This is not the case for photos and videos that were produced, filtered, and edited independently of the research project within the field—for instance, party photos, selfies, Facebook profile pictures, Instagram accounts, etc. Here, it is precisely the aesthetic or the technical processing of the images that provides information about the values specific to the lifeworld, especially since these images and videos have less the function of documentation and more the quality of staging (Kirchner and Betz 2015, p. 182). In the analysis of any kind of visual material produced by the participants, there are various aspects to consider: What sort of visual material is it? What does it show? What are the conditions of its production and technical parameters? What comprises its staged qualities? What are its components? What colors dominate? What symbols are visible? What is the narrative structure or story that the image suggests? What associations does it evoke? How and where is it used? Tuma et al. posit with regard to the analysis of film—which of course applies for photos as well—that “interpretation is always a process of understanding” (Tuma et al. 2013, p. 17).

Babette Kirchner and Gregor Betz propose a hermeneutics of images rooted in the sociology of knowledge that is oriented toward “natural” visual data—that is, “data considered relevant ‘by the field’” (2015, p. 179), which might include photos, flyers, posters, etc. Here too, the object of investigation is not just what is pictured but also the context in which it was produced and made public (2015, p. 184 ff.). A similar approach is advocated by Tuma et al. with interpretative video analysis (2013). Two essential characteristics of video material should be noted (Tuma et al. 2013, p. 33 ff.). First, its permanence—both in the sense that video is a permanent data recording technique in which the sequential character is chronologically preserved and in the sense that the data remains permanently available. Second, its density—which is to say that detailed data are produced at both the visual and the auditory level. Individual sequences can also be examined and analyzed through repetition and zooming.

Data can be very heterogeneous and can consist of transcribed interview sequences, observation protocols, sketches, screenshots, Facebook comments, photos, and video. Text and image data are then intertwined. Sarah Pink suggests creating linkages within the data rather than analyzing visual data separately from texts:

[…] it involves making meaningful links between different research experiences and materials such as photography, video, field diaries, more formal ethnographic writing, participant produced or other relevant written or visual texts and other objects. These different media represent different types of knowledge and ways of knowing that may be understood in relation to one another. (Pink 2013, p. 144)

Ruth Holliday used such a process to analyze the video diaries of people in the queer scene: “Once the diaries were completed, I viewed and coded them to identify points of similarity and difference as well as recurrent themes” (2007, p. 258).

6.7 Things and Material Culture

An analysis of things is always an interpretative process—regardless of whether the things are physically present or merely depicted in photographs. In that context, Lueger and Froschauer propose four areas of artifact analysis (2018, p. 65 ff.):

  • Conditions for the artifact’s existence: How does it come about that the artifact exists at all? What historical developments are associated with the artifact? In what contexts in the social world is the artifact typically found?

  • Descriptive analysis: What is the artifact made of? What material properties are particularly significant for the artifact? What components does it consist of? What characterizes the individual parts? What is the significance of the physical, social, and temporal contexts of the artifact?

  • Everyday contextual embedding of meaning: What are the concepts and meanings with which the artifact is linked? Is the artifact associated with emotional and sensory qualities? What groups of actors have dealings with the artifact? How do the possible meanings ascribed to the artifact by the various groups of actors differ? To what extent is the artifact a part of everyday normality or of an extraordinary phenomenon? What are the situations in which the artifact appears?

  • Distanced structural analysis: What are the necessary preconditions for the production of the artifact? Who has which interests connected with the production? How is the artifact produced? What are the contexts of use in which the artifact exists? How do people handle the artifact? To what extent does the way it is handled change? And to what extent does the artifact itself change? What effects are ascribed to it? What functions does it take on? To what extent does the artifact structure social settings?

These interpretive analyses should be carried out in interdisciplinary teams wherever possible. The process is particularly relevant for product design because it delves into things, function, meaning, and context in all their complex interactions. For this type of artifact analysis, Lueger suggests Grounded Theory, in which iterative loops are employed to look for structural similarities in order to verify what has been ascertained previously (2000, p. 163).

Even if explorative methods such as Grounded Theory and ethnosemantic analysis are not theoretically robust in their initial phases, the categories that result from them become associated with theories. This theoretical connection serves to reflect upon preconceived points of view and to make the findings relatable in an intersubjective frame of reference. It is based on the epistemological consideration that “neutral” observation is not possible because it is always situated and based on theoretical assumptions.