The full questionnaire can be found in supplemental File 1. Here we describe its component parts.
Translational Research Climate
To assess the climate for translational research, we developed a standardized questionnaire, modeled after the SOURCE, whose 18 items fall into two groups: (a) six items assessing the organizational climate for translational research at the institutional level, and (b) twelve items assessing it at the level of one’s immediate research environment. Each item asks about the respondent’s perception of a particular aspect of the translational research climate, either in the organization as a whole or in the respondent’s immediate research environment, e.g. ‘How committed are the senior administrators at your institution (e.g. deans, executive board) to supporting translational research?’. Respondents rated items on a 5-point scale: (1) not at all, (2) somewhat, (3) moderately, (4) very, and (5) completely. To avoid forcing respondents to rate items, we offered two further response options: (6) no basis for judging, and (7) prefer not to disclose. This rating scale is identical to that of the SOURCE, apart from the last option (‘prefer not to disclose’), which we added.
The selection and formulation of items underwent a three-step process. First, we assessed which of the 32 SOURCE items could be adapted to our case, e.g. by replacing ‘responsible research’ with ‘translational research’. We settled on 18 items; the remaining 14 SOURCE items could not be translated in this way and were dropped, e.g. items about research misconduct. Second, we discussed our list of items within our project group, which included medical researchers, experts on translational medicine, and social scientists. Third, to see how our items were interpreted and received within the target group, we conducted a cognitive pretest with five respondents who differed in status (graduate students, postdocs, professors), gender, and position along the bench-to-bedside continuum (researchers, clinicians, clinician scientists).
Translational Research Practices
We further included a set of 15 items asking about the respondents’ self-reported translational research practices (section 2C of the survey) in order to relate these to the perceived translational research climate. We constructed these items to reflect six dimensions of translational research practices: (1) education, (2) communication, (3) publication, (4) collaboration, (5) career path, and (6) overall. We identified these dimensions using qualitative data (a literature review and interviews with 78 researchers, clinicians, and clinician scientists) from a previous research project on translation (Blümel et al. 2015, 2016). The items were discussed and tested, together with the translational research climate items, in our project group and in our pretest.
Research Integrity Climate
We also administered the complete set of research integrity climate items of the original SOURCE—11 items for assessing institutional climate and 21 items for assessing the climate of the immediate research environment. This allowed us to investigate whether constructs in our translational research climate survey (STRC) were distinct from established constructs of research integrity climate (SOURCE).
A further section of the survey included questions about professional status at the institution, primary departmental affiliation, enrollment in a doctoral program, number of years working in research, whether one’s research is preclinical, clinical, or both, professional areas of interest, size of the immediate research environment, gender, and year of birth.
At the end of each survey part (SOURCE and STRC), we included a final free text question to allow the respondent to make further comments, e.g. ‘Are there any other things about your experience with translational research practices at your institution that you would like to tell us and about which we have not already asked?’
Our study was approved by the Charité Ethics Committee (ethics vote number EA1/184/17) as well as the Data Protection Office and the Staff Council, and was conducted as an anonymous web-based survey. A license agreement was signed for the use of the SOURCE. The Charité administration helped us to identify all researchers and doctoral students working at the institution, 7264 individuals in total. We generated an equal number of electronic tokens and provided these to the study center’s administration, which created a linking table assigning tokens to email addresses and sent the researchers invitation emails, in the name of the Dean of the Charité, with a token-unique hyperlink to the online survey, followed by a reminder 2 weeks later. As an incentive to participate, €2 per participant were donated to one of two preselected non-governmental organizations (NGOs) working in the field of health care, between which participants could choose after finishing the questionnaire. This study was preregistered on OSF (https://osf.io/qak8e/).

The survey was online for a period of 4 weeks during February and March 2018. Of all 7264 invitees, 1095 opened the survey for at least one second, 969 answered at least the first question, 602 completed at least the SOURCE section, 533 completed at least section 2B (the STRC), and 523 completed at least section 2C (self-reported translational research practices). 521 invitees completed the whole questionnaire including all status and demographic items, resulting in a response rate of 7%. Informed consent was obtained from all individual participants included in the study.
A non-response survey, sent to all invitees after the main survey had closed, indicated that a number of participants had been interrupted while filling in the questionnaire and either were not aware that they could continue later or simply forgot to do so. Others were discouraged by the length of the questionnaire.
To guarantee the anonymity of the respondents, none of the authors had access to the study center’s linking table, which assigned tokens to email addresses, and the study center, in turn, had no access to the raw data, which linked tokens to individual responses. Three of the authors (AS, BH, MR) deleted the token column and further pseudonymized responses by aggregating birth years into 5-year intervals and omitting the answers to the free text questions. The resulting dataset and the corresponding codebook are available from the OSF database at https://doi.org/10.17605/OSF.IO/QAK8E.
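The 5-year aggregation of birth years can be illustrated as follows. This is a minimal sketch, not the authors’ actual pseudonymization code; the function name and interval labels are our own, and we assume intervals aligned to multiples of five:

```python
def bin_birth_year(year, width=5):
    """Map a birth year to a coarse interval label, e.g. 1987 -> '1985-1989'.

    Assumes intervals aligned to multiples of `width`; the study's actual
    binning scheme may differ.
    """
    start = year - (year % width)
    return f"{start}-{start + width - 1}"
```

Reporting only the interval instead of the exact year reduces the risk of re-identification from combined demographic attributes.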
When analyzing our data, we investigated two types of relationships, R1 and R2, as illustrated in Fig. 1. First, we established relevant factors for the STRC. Subsequently, we examined the identified factors for uniqueness with regard to the SOURCE questionnaire (R1). Finally, we investigated the relationship between the STRC and the items on translational research practice (R2). For the statistical analysis we used the statistical programming language R 3.5.1 (R Core Team 2018) with the additional packages ‘psych’ version 1.8.12 (Revelle 2018) and ‘lavaan’ version 0.6-5 (Rosseel 2012).
Exploratory and Confirmatory Factor Analysis
To determine factors that summarize the answers to the STRC on a reduced number of scales, we performed an exploratory factor analysis (EFA) on the 18 items from the STRC part of the questionnaire. For this, we largely followed the methods used for the development of the SOURCE scales (Martinson et al. 2013), but made some methodological changes where more robust methods were available. The full analysis code, including a commented-out version that is identical to the analysis in the original SOURCE study, can be found on OSF at https://doi.org/10.17605/OSF.IO/QAK8E.
In summary, we mapped the answers to a numerical scale ranging from 1 (‘Not at all’) to 5 (‘Completely’). We then determined the number of factors by Horn’s parallel analysis (Horn 1965). The factor analysis was conducted using weighted least squares estimation, which is better suited to ordinal data than maximum likelihood estimation (DiStefano and Morgan 2014), and an oblimin rotation, which does not force the factors to be orthogonal. Each item was assigned to the factor on which it had the highest loading. The factors were not modified further after the analysis. We excluded participants who did not complete the questionnaire or who answered ‘No basis for judging’ or ‘Prefer not to disclose’ more than 50% of the time in the STRC part of the questionnaire, leaving the answers of 438 participants for the analysis (83 participants were excluded for giving > 50% such answers in the STRC part).
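The coding of responses and the exclusion rule can be sketched as follows. This is a hedged Python illustration rather than the authors’ R code; the function names are our own, and the response codes follow the rating scale described above, with codes 6 (‘No basis for judging’) and 7 (‘Prefer not to disclose’) treated as missing:

```python
def code_response(raw):
    """Keep substantive ratings 1-5; treat codes 6 and 7 as missing."""
    return raw if raw in (1, 2, 3, 4, 5) else None

def exclude_participant(responses, threshold=0.5):
    """True if more than `threshold` of a participant's STRC answers
    are missing (i.e. codes 6/7), mirroring the >50% exclusion rule."""
    coded = [code_response(r) for r in responses]
    missing = sum(1 for c in coded if c is None)
    return missing / len(coded) > threshold
```

For example, a participant with 10 of 18 answers coded 6 or 7 would be excluded, while a participant with 9 of 18 such answers would be retained.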
As only about one-third as many participant responses were available to us as were used for the original SOURCE validation (438 full responses compared to 1267 responses for SOURCE) (Martinson et al. 2013), we chose a slightly different approach for the confirmatory factor analysis. Instead of initially splitting the full dataset into two parts, we used the full dataset for the EFA to obtain the initial estimate of the factors. The dataset was then split into two random halves n = 100 times. Each time, the EFA was repeated on one half of the data to assess the robustness of the factors under different partitionings of the data. A confirmatory factor analysis (CFA) using diagonally weighted least squares estimation was then performed on the second half of the data, fixing the factors as determined in the EFA on the first half. The average goodness-of-fit parameters (χ2/df, CFI, RMSEA, SRMR) were calculated for the CFA models to assess the adequacy of the fit. Again, all 18 STRC items were used for the CFA.
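The repeated split-half procedure can be sketched as follows. This is a structural illustration only: `fit_efa` and `fit_cfa` are hypothetical placeholders standing in for the actual ‘psych’ and ‘lavaan’ model fits in R, and the fit-index names are examples:

```python
import random

def split_half_validation(data, fit_efa, fit_cfa, n_splits=100, seed=1):
    """Repeat EFA on one random half and CFA (factors fixed from that EFA)
    on the other half; return the CFA fit indices averaged over splits."""
    rng = random.Random(seed)
    totals = {}
    for _ in range(n_splits):
        idx = list(range(len(data)))
        rng.shuffle(idx)
        half = len(idx) // 2
        first = [data[i] for i in idx[:half]]
        second = [data[i] for i in idx[half:]]
        factors = fit_efa(first)            # re-estimate the factor structure
        fit = fit_cfa(second, factors)      # dict of fit indices, e.g. {'CFI': ...}
        for name, value in fit.items():
            totals[name] = totals.get(name, 0.0) + value
    return {name: total / n_splits for name, total in totals.items()}
```

Averaging the fit indices over many random partitions reduces the dependence of the conclusions on any single split of a comparatively small sample.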
Additionally, we analyzed the relationship between the STRC and the SOURCE by calculating the correlation coefficients between the newly established STRC scales and the original SOURCE scales.
Next, we analyzed the relationship between the STRC and the translational research practice questions using a regression analysis.
The following six dimensions of self-reported translational research practices were constructed prior to conducting the survey: overall (questions 2C01, 2C02), education (2C03), communication (2C04, 2C09, 2C10), publication (2C05, 2C06, 2C07, 2C08), collaboration (2C11, 2C12), and career path (2C13, 2C14).
For the STRC questions, we used the factors that we established in the factor analysis. For both the STRC factors and the practice dimensions, we calculated the average scores for each factor/dimension as the mean of the scores for each question belonging to the factor/dimension. The average score was only calculated if participants answered more than 50% of the questions belonging to the factor/dimension.
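The scoring rule above can be sketched as follows; a minimal Python illustration (the function name is our own), assuming missing answers are represented as `None`:

```python
def scale_score(answers):
    """Mean of a participant's answered items for one factor/dimension.

    Returns None if 50% or fewer of the items were answered, mirroring
    the rule that a score is only calculated when more than half of the
    questions belonging to the factor/dimension were answered.
    """
    answered = [a for a in answers if a is not None]
    if len(answered) <= len(answers) / 2:
        return None
    return sum(answered) / len(answered)
```

For example, a participant who answered two of a dimension’s three items receives the mean of those two answers, while a participant who answered only one of three receives no score.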
To assess the relationship between the STRC factor scores and the practice dimension scores, we fitted a multiple linear regression model for each of the practice dimensions, using the three STRC factor scores as predictor variables. As the ‘career path’ dimension consists of two ‘yes/no’ questions, we fitted a logistic regression model in this case, categorizing the answers for this dimension as follows: (a) at least one of the two questions answered ‘yes’, (b) neither question answered ‘yes’. To account for the many different models tested, the resulting p values were corrected using the Benjamini–Hochberg procedure (Benjamini and Hochberg 1995).
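The Benjamini–Hochberg step-up procedure used for the multiple-testing correction can be sketched as follows; this Python illustration is intended to be equivalent in spirit to R’s `p.adjust(..., method = "BH")`, which the authors’ R analysis would typically rely on:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure).

    Each raw p-value is scaled by m / rank (rank in ascending order),
    then monotonicity is enforced from the largest rank downward.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    min_so_far = 1.0
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        value = pvals[i] * m / (rank + 1)
        min_so_far = min(min_so_far, value)
        adjusted[i] = min_so_far
    return adjusted
```

A hypothesis is rejected at false discovery rate q when its adjusted p-value is at most q.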
The intraclass correlation (ICC) with respect to the centers of the institution was calculated for the STRC factors to test whether the perception of the translational research climate differs between centers within the surveyed institution.
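For a balanced one-way design, the ICC can be computed from the between- and within-group mean squares. The following is a minimal sketch of the ICC(1) estimator, assuming equally sized groups; it is an illustration of the general idea, not necessarily the exact estimator used in the study:

```python
from statistics import mean

def icc1(groups):
    """One-way random-effects ICC(1) for equally sized groups:
    (MSB - MSW) / (MSB + (k - 1) * MSW), with k the group size,
    MSB the between-group and MSW the within-group mean square."""
    k = len(groups[0])
    n = len(groups)
    grand = mean(x for g in groups for x in g)
    msb = k * sum((mean(g) - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC near 0 indicates that respondents within the same center perceive the climate no more similarly than respondents from different centers, while larger values indicate center-level clustering of perceptions.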
For easier comparison with the results obtained in a study on the relationship between research integrity climate and practice for the SOURCE survey (Crain et al. 2013), we additionally repeated their analysis methods on the STRC (supplemental Table S3).