
In this section we elaborate on the methods we used to collect and analyse our data. We followed a multi-method data collection process in which we triangulated data obtained via different collection methods. The methods comprised document analysis, semi-structured interviews, participant observations, surveys, and workshops in which group discussions took place. Given our time constraints and travel restrictions, we did not apply all methods equally to all four cases but tailored them to each specific case. An overview of the methods used in our respective cases is depicted in Table 4.1. The case-specific collection methods can be reviewed in Appendix 3; the following text focuses on the general data collection, interpretation and analysis methodology.

Table 4.1 Methods used in our empirical cases

4.1 Data Collection Methods

We used document analysis as a starting point for gaining an overview and for triangulating data from interviews and participant observations (Flick, 1991). The documents we analysed included reports, further background material shared with us, website material, newsletters and scientific publications regarding our case studies. Publications on modalities, knowledge interactions and impact assessments were also reviewed. Document analysis served as the foundation of every case study we conducted and informed our additional data collection methods.

Further, we conducted semi-structured interviews with stakeholders relevant to our cases. The interviewees included both staff of our partner organisations and external stakeholders. The interview guidelines were developed in a two-step process. In the first step, a general interview guide was developed based on our sensitising concept; this guide can be found in Appendix 4. In the second step, we tailored the interview guidelines to the particular interviewee, taking into account information gained from previous interviews as well as the interviewee’s individual background, expertise and position.

We conducted the semi-structured interviews in an episodic manner and encouraged narrative elements. This approach combines questioning and narration to inquire into personal experiences and values as well as expert knowledge on specific topics (Lamnek, 2010). In this way, contradictions and multi-layered situations may become apparent and be recognised (BenEzer & Zetter, 2015). The interviewees were selected through discussions with the focal points of our partners: we asked for staff and stakeholders relevant to our topics of interest who were willing to be interviewed.

In addition, we conducted participant observations because they allowed us to include another angle on the activities our partners implement. This method of data collection potentially offers a holistic interpretation by sensing what is not (or cannot easily be) put into words (DeWalt & DeWalt, 2010). The events in which we were to participate were selected jointly with our partners. We considered it necessary to analyse both virtual and physical formats. In one case, recorded online events were used as a data source.

The observation process was conducted in a structured manner with the help of observation protocols, the guidelines for which can be found in Appendix 5. These guidelines were designed prior to our observations and resulted from a group discussion in which we agreed upon six relevant blocks for analysis, as shown in Table 4.2.

Table 4.2 Analytical framework for participatory observations

The questions considered in the protocol and the observation approach were developed and refined iteratively, in the sense that our approach was readjusted after additional input from observations. Every observation was conducted by at least two members of our team. This allowed for complementarity and increased the extensiveness of our observations, since different observers gain different understandings of what they see (Kawulich, 2005, p. 6). After every observation, concluding impressions, thoughts and reflections were recorded as soon as possible. Special attention was given to assessing the extent to which our presence as observers might have influenced a situation, and whether actors might have changed, or refrained from, their usual behaviour.

In the case of RCI, questionnaires were shared with participants of past events conducted by the partner, with the intention of assessing their personal experiences of the modalities of knowledge interaction in which they had participated. This included perceptions of the interaction process itself, its implementation and its effects. The survey was designed based on our sensitising concept and included both open-ended and closed questions. To decrease language barriers, we shared the survey in French and English; both versions are attached in Appendix 6.

We had two objectives with this survey of RCI’s past participants. First, we sought to expand and diversify our view of the modalities, their realisation and their effects. Predominantly interviewing staff working for our partners and analysing their self-published documents captures only their self-perception of the modalities of knowledge interaction. Surveys sent to external participants allowed us to go beyond this internal view and gain a differentiated perspective. Second, the survey served to meet our ethical aspirations regarding a collaborative research approach. As elaborated in our chapter on reflections and limitations (see Chapter 2), working collaboratively means overcoming mono-directional knowledge extraction (Burman, 2018, p. 56). Our focal person at RCI highlighted the benefits their organisation could gain from us surveying former participants. Based on this, the surveys were designed to assist the partner’s strategic reflection on its modalities of knowledge interaction. To do so, we based our surveys on a mixed-methods approach, including quantitative as well as qualitative components (Ackerly & True, 2010).

Beyond these data collection methods, we also conducted validation workshops in order to share our insights with our partners. Within these workshops we presented and discussed our findings, asked open questions and received valuable feedback.

4.2 Data Analysis and Interpretation of Results

The following paragraphs introduce the analysis and interpretation process of our data. We follow the idea of “investigator triangulation” (Flick, 2004) to limit the bias of “free interpretation” (Mayring & Fenzl, 2014, p. 546). This “collaborative approach” (Given, 2006, p. 58) also strengthens the validity and reliability through intra- and inter-coder congruence in our analysis and interpretation process (Mayring & Fenzl, 2014, pp. 546–547).

4.2.1 Qualitative Content Analysis: Survey, Interviews and Documents

The data gathered through surveys, interviews and documents was analysed using a qualitative content analysis approach following Mayring (2015, pp. 70–90). Whereas the data obtained through questionnaires and documents was already available in written form and could be used directly for categorisation, all interviews first had to be transcribed. This was done following content-semantic transcription rules (Dresing & Pehl, 2018). We produced full transcriptions of the interviews conducted in Rwanda; for the interviews conducted in India, however, only key phrases and words could be transcribed due to time and resource constraints. In a second step, all data material, including interview transcripts, survey answers and selected documents, was summarised and abstracted via the formation of categories through coding (Mayring, 2015, p. 70). The formation of categories included deductive and inductive dimensions: whereas for the deductive categories we used our sensitising concept as a starting point, the inductive categories were generated from the interview content itself. For this, we also considered the latent meaning of statements, going beyond the “manifested surface content” (Mayring, 2015, p. 32), because wording and definitions may be used inconsistently within and across interviews. Finally, our coded categories were reviewed again against the starting material (Mayring, 2015, p. 70) as well as the data collected through the other methods.

4.2.2 Validation of Observation Protocols from Participant Observations

We evaluated the observation protocols by analysing the notes observers made during the observations of our case studies. We used a comparative evaluation approach to provide quality assurance by comparing data from the protocols of different observers, striving to capture a diversity of conditions and effects of different mechanisms (Przyborski & Wohlrab-Sahr, 2014, p. 127). Following Lüders (2004), this approach aims to systematically combine different data and results to arrive at a more “dense” description (Lüders, 2004, p. 400).

4.2.3 Interpretation of Results and Communicative Validation

We interpreted our results by relating the collected data to our research questions, underlying theories and sensitising concept. By ordering and structuring our data accordingly, we obtained interpretable results, which are illustrated in the following section. It is important to highlight that the interpretation process did not, in practice, follow a linear approach. Instead, it resembled a cascade-like iterative process in which our research process was repeatedly reconsidered and partially readjusted based on our results and interpretations. This approach is in line with our ambition to be exploratory and goes hand in hand with Reichertz’s understanding of scientific work as always and necessarily part of the creation of its social and societal context, since researchers always live in the practice they study and co-produce (Reichertz, 2014, p. 70).

After completing the interpretation process, a communicative validation was conducted with our partners, following the approach by Mayring (2018, p. 21). With RCI we conducted two validation workshops, one offline and one online. With RIS, GIZ Rwanda (DigiCenter), GIZ India (WASCA and IGEF) and USPC we conducted one validation workshop each. Beyond the partner-specific communicative validations, we also conducted a final presentation and discussion of results with all our partners jointly. This event offered another opportunity for our partners to interact with and react to each other’s contributions.