We carried out a mixed-methods study on software frameworks and collected both qualitative and quantitative data (Fig. 2). First, we interviewed nine experienced practitioners from two companies that develop applications using the Qt framework. Second, we conducted a longitudinal survey among students who represented less experienced developers and were selecting and starting to use Qt or other frameworks to develop an application.
The studied frameworks
The main framework in the focus of this research was Qt (Footnote 1). In addition, the student survey included data on the other frameworks that the students used in their course assignment (Fig. 2); this enabled a comparison between Qt and other frameworks.
We selected Qt as the main study subject because of its accessibility. Qt is also relatively popular, especially in commercial embedded products, and its long history has made it mature and quite large. For example, it offers many different types of boundary resources, and each type is significant in size.
Data collection: interviews
We conducted nine semi-structured interviews with practitioners using Qt (left side of Fig. 2). Hence, this part of the study focused on the views of experienced and already loyal developers who use Qt regularly.
Selecting participants The target population was experienced practitioners working in companies that use Qt in their application development.
The practitioners were selected using convenience sampling. We asked one Qt manager to name external application developers and projects. Based on his recommendation, we recruited participants from two Finnish companies. The contact persons from those companies were approached via e-mail to identify practitioners for interview.
The selected practitioners’ professional experience ranged from 10 to 25 years. They represented different kinds of stakeholders: three developers and architects, three consultants, and three managers (Table 1). Their experience in using Qt also varied: five of them had used Qt for several years, up to 11 years, whereas four had used Qt for less than one year. Six of the practitioners were from Company A and three were from Company B. The practitioners were using Qt to build either mobile applications across several platforms (Android, iOS and Windows Phone) or embedded software that was delivered together with the hardware.
Data collection The interviews were conducted in a semi-structured style. A set of themes with predefined questions was used, and additional questions were asked within each theme. The themes were as follows: background information, selection of Qt, getting started with Qt, daily use of Qt, use of Qt platform boundary resources, willingness to recommend Qt, and future expectations and needs.
The average length of the interviews was 1 hour and 16 minutes. The interview protocol followed the recommendations in (Seaman 1999). The interviews were voice-recorded and captured into initial notes. Soon after each interview session, extensive and detailed notes were written by listening to the recording.
The first two interviews were conducted as pilot interviews. The pilot interviewees represented different kinds of stakeholders, which also gave an understanding of how the themes fit different types of interviewees.
Data collection: student survey
We conducted a longitudinal student survey (right side of Fig. 2) to investigate how novice developers select, adopt and initially use frameworks.
Participants and procedure The target population was less experienced developers. To select participants to match this population, we conducted the study among Master’s level university students who were attending the User Interface Construction course in the department of Computer Science.
During the course, students developed an application that consisted of a UI and some mock-up functionality to enable interaction. Three versions of the application for various platforms were developed during the course. Although Qt was advertised by the course personnel, the students were free to use any framework and change it during the course. The application was created in teams of three students, and the project lasted approximately three months.
The longitudinal survey was timed in three phases to capture students’ expectations, initial experiences, and final experiences (Table 2). Students were invited to participate in the study during a lecture and through the course web page. Participation was not mandatory, but students received two extra course points for responding to all three questionnaires. Out of the 86 students who participated in the course, 74 answered questionnaire I (response rate 86.0%), 64 answered questionnaire II (74.4%), and 51 answered questionnaire III (59.3%). Such a drop in participation is typical, and unavoidable, in longitudinal studies (Ployhart and Ward 2011).
Of the 51 students who responded to all three questionnaires, 57% were male. Their ages ranged from 19 to 50, with the majority (67%) in the 20–25 age group. The students had little previous experience with Qt: 6.8% had tried out Qt and 6.8% had used it repeatedly. Their overall software development experience ranged from a few programming courses to 15 years of professional experience. About 30% of the students mentioned having experience from hobby projects, and 33% already worked in a company as software developers.
Survey design The longitudinal survey used a design and quantitative measures similar to those of a previous longitudinal study on the role of expectations and experiences in service evaluation (Kujala et al. 2017). The survey comprised three questionnaires at three time points: before use, after three weeks of use, and after three months of use (Table 2). Enjoyment and subjective usability were measured in all three questionnaires to find out how they evolve over time: first the expectations, then the experiences. Questionnaire I asked about students’ background, experience of using frameworks, and expectations, while questionnaires II and III focused on students’ experiences (Table 3). Enjoyment was used as a measure of how intrinsically motivating the framework was (Davis et al. 1992), and the statements measuring enjoyment were adapted from Davis et al. (1992) and Mitchell et al. (1997). Subjective usability of the framework was measured with the Usability Metric for User Experience (UMUX) (Finstad 2010), which measures usability through three ISO 9241-11 dimensions: effectiveness, efficiency, and satisfaction. To study customer loyalty, we measured users’ overall evaluation of the framework with a likelihood-to-recommend measure, which can be used to calculate the Net Promoter Score (NPS), a strong indicator of customer loyalty and growth (Reichheld 2003). To study how the platform boundary resources support development, the students were asked to rate the different boundary resources and to identify their strengths and weaknesses (Table 3).
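As a concrete illustration of the likelihood-to-recommend measure: the Net Promoter Score is conventionally computed from 0–10 ratings by subtracting the percentage of detractors (ratings 0–6) from the percentage of promoters (ratings 9–10) (Reichheld 2003). A minimal sketch of this calculation; the ratings shown are hypothetical, not data from the study:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 likelihood-to-recommend ratings:
    % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical ratings: 2 promoters, 2 passives, 2 detractors -> NPS 0.0
print(net_promoter_score([10, 9, 8, 7, 6, 3]))
```

Note that the passive ratings (7–8) affect the score only through the denominator, which is why NPS is often read as a loyalty indicator rather than a mean satisfaction score.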
The questionnaires were piloted (Kitchenham and Pfleeger 2002) and corrected based on the feedback. In the pilot, two Master’s students and one student from the course answered the questionnaires and thereafter described how they understood each question.
Data analysis
The data analysis was split into two parts (Fig. 2). The qualitative analysis focused on the Qt framework and on answering RQ1. The quantitative analysis covered all of the selected frameworks to answer RQ2.
Qualitative analysis The analysis for RQ1 utilized two data sources: practitioner interviews and open answers from the student survey (Fig. 2). The analysis was conducted as qualitative content analysis using both a priori and emerging codes (Lazar et al. 2010). The coding of the content utilized open coding, similar to Grounded Theory (Strauss and Corbin 1998).
The process of analyzing the practitioner interviews (Section 3.2) started immediately after the first interview. The data to be analyzed consisted of the detailed notes written while listening to the interview recordings. A priori codes, based on the interview themes and on existing knowledge of Qt, were used to mark and organize the data; examples of such codes were social boundary resource and selecting Qt. The data was organized into an Excel spreadsheet in which each row contained the data corresponding to one code and each column corresponded to one interviewee. In addition, emerging codes were used to expand and refine the analysis; examples of emerging concepts included open source licensing and peer support. An initial analysis was carried out immediately after each interview; its results and any new codes were taken into account in the following interviews, and the coding of earlier interviews was refined when needed. The cross-analysis of the interviews was carried out after all the interviews had been conducted; it combined the data related to each code and compared data across codes.
The process of analyzing the student survey results was based on the responses to the open questions in the student survey (Section 3.3). This analysis focused on the final experiences (questionnaire III) and on the students who selected the Qt framework (15 out of 51). The open responses from these 15 students yielded rich qualitative content (four pages of plain text in total). The content was coded in the same way as the interviews: with predefined codes for each platform boundary resource type and with emerging codes (e.g., deployment, taking into use).
Finally, a cross-analysis of the interviews and the survey was performed to answer RQ1. The data and analysis results were combined and categorized by platform boundary resource type. We compared practitioner and student experiences for each platform boundary resource type to identify similarities and differences. Emerging concepts were used to construct the main findings for each type; an example of such an emerging concept was fragmentation of social boundary resources. In addition, we identified whether these concepts supported or hindered development, and to which development phase they related.
Quantitative analysis The analysis for RQ2 was based on the quantitative data from the student survey (Fig. 2). The statistical analysis began by establishing the summary measures of the scales and assessing their reliability. Data were included only from the 51 students who answered all three questionnaires, so that trends in the variables could be reliably analyzed. Moreover, 10 students were excluded from the statistical analysis because of partly missing data. Cronbach’s alpha coefficients for enjoyment were .885, .867, and .954 from the first to the third questionnaire. The corresponding coefficients for usability were .746, .747, and .870, demonstrating a high degree of internal reliability of the scales. To study the relation between students’ loyalty to the framework and the other study variables (Table 3), bivariate correlations were used to compute the correlation coefficients.
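For reference, the reliability coefficients reported above follow the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale), and the bivariate correlations are ordinary Pearson coefficients. A minimal sketch of both computations; the function name and sample data are illustrative, not taken from the study:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of scale items
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Two perfectly consistent items yield alpha = 1.0 (hypothetical ratings):
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])

# Bivariate (Pearson) correlation between two study variables
# (hypothetical loyalty and enjoyment scores):
loyalty = [7, 9, 6, 10, 8]
enjoyment = [5, 7, 4, 7, 6]
r = np.corrcoef(loyalty, enjoyment)[0, 1]
```

The `ddof=1` arguments give sample (rather than population) variances, which is the usual convention when reporting scale reliability for survey data.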