Background

The UNESCO Recommendation on Open Science (UNESCO, 2023) defines Open Science as an inclusive paradigm that aims to make scientific knowledge accessible, available, and reusable for all, regardless of language barriers, and describes FAIR (Findable, Accessible, Interoperable, and Reusable) open research data as a central part of Open Science. The European Code of Conduct for Research Integrity (ALLEA - ECoC, 2024, Version 3, 2023) is a comprehensive framework in Europe outlining research’s professional, legal, societal, ethical, and moral obligations. Recognised as the primary benchmark for fostering “open and reproducible practices” (ALLEA - ECoC, 2024, Version 3, 2023, p. 5) in the responsible conduct of research, it also acts as a blueprint for national and institutional codes of conduct. The code underscores essential principles such as reliability, honesty, respect, and accountability, and explicitly names the FAIR principles as a standard in data-driven research.

The FAIR guiding principles have, in fact, advanced the implementation of open science actions in data-driven research since 2016. These principles derive from Wilkinson et al.’s publication, “The FAIR Guiding Principles for Scientific Data Management and Stewardship” (Wilkinson et al., 2016), which offers guidance for easy access to and exchange of digital research data.

Findability, Accessibility, Interoperability, and Reusability (FAIR) are the four key pillars of Wilkinson et al.’s (2016) principles. Originally written for data publishers and data stewards, they build upon the Data Citation Principles (Data Citation Synthesis Group, 2014) and contain 15 statements (Wilkinson et al., 2016). Since 2016, the application of the FAIR principles has widened in scope beyond data management, leading many others in the research system to adopt the FAIR guiding principles. In this way, the principles also began to guide (future) researchers in open science actions.

As early as 2018, the implementation of these principles seemed to be the future of first-rate academic research (Boeckhout et al., 2018). A paradigm shift towards enabling and using the FAIR guiding principles promised, then as now, to facilitate and foster the reliable exchange of knowledge. By supporting standards for high-quality data sharing, the FAIR guiding principles bring benefits to research (such as standardisation, accountability, quality of scientific findings, reproducibility, and replicability) and to society (such as citizen participation in science, sharpened science communication, and political decision-making).

Organisations and projects worldwide have referenced the FAIR guiding principles and joined the chorus promoting sustainable open science actions and reliable research. Research integrity advocates (Moher et al., 2020) and the Reproducibility Networks (Munafò et al., 2022), with guidelines such as the Hong Kong Principles (Moher et al., 2020), the European Code of Conduct for Research Integrity (ALLEA - ECoC, 2024, Version 3, 2023), the Marseille Declaration, and scientific standards for journals (Nosek et al., 2015), are also part of the movement supporting the FAIR guiding principles. As research becomes more data-intensive, the implementation of FAIR in (A) research infrastructures and in (B) research-performing organisations seems to be a prevailing and minimum goal for all (Toribio-Flórez, 2021; Boeckhout et al., 2018).

Given the importance of researchers as open science drivers, higher education and other stakeholders have regarded teaching the FAIR guiding principles as a preeminent mission (UNESCO, 2023; ALLEA - ECoC, 2024, Version 3, 2023; Toribio-Flórez, 2021). As, for example, the 2021 Max Planck study explicitly laid out (Toribio-Flórez, 2021), these FAIR skills should become an important part of higher education.

In 2019, the Horizon2020 Path2Integrity initiative developed free FAIR training material to take steps towards implementing FAIR as a standard in higher education and creating a culture of FAIR sharing (Hermeking & Priess-Buchheit, 2022). Similar educational actions were taken by FORRT, PRIM&R, the UK Data Service, the Economic and Social Research Council, the Embassy of Good Science, FAIRsFAIR, PARTHENOS, the Consortium of European Social Science Data Archives, and others. However, despite the general community effort to implement the FAIR guiding principles and the current broad educational offerings of FAIR training, we still need to better understand which of these educational efforts work and which do not.

Recent studies by Koterwas et al. (2021) and Goddiksen and Gjerris (2022) summarise various teaching objectives and methods in such training, pointing to dialogical approaches (Koterwas et al., 2021) and advocating for longer-term training to help students change their mindsets (Goddiksen & Gjerris, 2022). Shanahan et al. (2021) specifically document the latest progress on how to teach FAIR, and Sira et al. (2024) and van den Hoven et al. (2023) evaluate various approaches to teaching academic integrity in general.

Referring to these general guidelines for more effective FAIR training, Pownall et al. (2023) outlined that we need rigorous, thorough, and transparent evidence on teaching open and reproducible scholarship. This research gap on the effectiveness of FAIR training has been present since the principles’ inception in 2016.

That is why the Path2Integrity project mentioned above (Prieß-Buchheit et al., 2020) decided in 2019 to design and offer FAIR training and to evaluate the success and specifics of this training. The project designed student-centred learning programs for the responsible conduct of research (Hermeking & Priess-Buchheit, 2022) for high-school and bachelor students (S-Series), master students (M-Series), and (early career) researchers (Y-Series). The M- and Y-Series contain a session on the FAIR guiding principles called M8. Over three years, the Path2Integrity consortium offered this learning session, M8, and other sessions to international learners in different settings. Most of the sessions were accompanied by an extensive evaluation. At the end of the project, the collected data was made available via the repository Zenodo.

The following describes follow-up research on this Path2Integrity FAIR training and answers the following questions: Can we see a promising learning shift towards suggested FAIR actions and justifications after FAIR training? Which factors influence this learning process? What additional advantages do we glean from these educational endeavours?

Methods

Design

This article uses Path2Integrity’s open data collection (Zollitsch & Wilder, 2022a) to examine whether FAIR training can foster the shift towards FAIR guiding principles in higher education. Our study examines international (future) researchers in FAIR training and their learning progress by comparing their scientific suggestions and justifications regarding the FAIR guiding principles before and after the training. The design of this study is thorough and rigorous due to a pre- and post-test design accompanied by a control group comparison. The study is also transparent due to open data, open tools, and clear documentation of how we conducted each step.

All information in this study relates to Path2Integrity (Prieß-Buchheit et al., 2020), a Horizon2020 project administered from 2019 to 2022. Path2Integrity advocated for research integrity and supported a positive relationship between science and society in the rapidly changing research landscape. By designing and implementing dialogue-based learning sessions (Priess-Buchheit, 2021; Hermeking & Priess-Buchheit, 2022) and promoting role models, the project fostered research quality and societal impact by engaging the next generation. Within the three-and-a-half-year research and innovation program, several dialogical training sessions (Priess-Buchheit et al., 2021) and an extensive evaluation (van den Hoven et al., 2023) paved the way towards enhanced and effective educational practices and learning methods targeted at students, early career researchers, and everyone directly or indirectly involved in research, including educators and senior researchers.

To close the research gap on FAIR training, we used the results of Path2Integrity. We examined the project’s open data collection (Zollitsch & Wilder, 2022a) of the standardised FAIR training called M8 (Lindemann & Priess-Buchheit, 2021). By highlighting the data of the 90-minute online and onsite FAIR training in Path2Integrity’s open data collection, we show how far (future) researchers familiarise themselves with the importance and challenges of the FAIR guiding principles after the training. We hypothesised:

  1. that FAIR training had a positive impact on both the suggested action and the justification of the students and would thus yield a clear shift in the response behaviour of the students in the post-test compared to the pre-test towards the correct answers in the P2I questionnaire (A1 and B1, respectively) in the intervention group;

  2. that a particular training that focuses explicitly on FAIR is necessary to produce this shift (if present), and we should thus not be able to reproduce the former effect (if it is present) in the control group;

  3. that the legal framework of the universities associated with the intervention group may impact how students justify their actions. In the pre-test, students in the intervention group may choose to justify their open science practice with legal frameworks over the reliability of research results.

In the second step, we

  4. contrast the learning factors of the FAIR training with the help of the learners’ feedback (Zollitsch & Wilder, 2022b).

Although the study has a rigorous design, it evaluates learners’ actions in the classroom and stays on the so-called second level of the TRIT framework (van den Hoven et al., 2023, p. 14). We deliberately start with this design to see whether there is a shift in the classroom and, in case of a positive result, to go on to the third level and evaluate whether there is also a shift in actions outside the classroom (van den Hoven et al., 2023, p. 14), for which no data or questionnaire is currently available.

In sum, we probe the short-term effectiveness of the FAIR training by analysing the data from the Path2Integrity open data collection. By comparing the learning success of the intervention and control groups, we assess whether FAIR training is effective and which learning factors were rated highest.

Measures

The Path2Integrity open data collection contains data from the Path2Integrity learning card program (Hermeking & Priess-Buchheit, 2022; Priess-Buchheit et al., 2021), including the data from the Path2Integrity FAIR training. Path2Integrity is an open educational program that is free of charge and usable for all disciplines in higher education. The learning principles from the handbook describe the training standards (Priess-Buchheit et al., 2021). The program aims to teach students “how responsible research needs to be conducted to be reliable and thus useful for society” (Priess-Buchheit et al., 2021, p. 3).

The Path2Integrity open data was collected through:

• the online P2I questionnaire (Zollitsch et al., 2020, 2021), which focuses on scientific suggestions and justifications referring to research integrity topics in the European Code of Conduct (such as FAIR guiding principles, research procedures, collaborative work, and research environment), and

• the online Path2Integrity feedback sheet (Niesel et al., 2021), which focuses on the learners’ immediate reaction to training.

Path2Integrity collected the data between August 2019 and January 2022 (see references Zollitsch & Wilder, 2022a and Zollitsch & Wilder, 2022b for the data). The data group FAIR training (see Fig. 1) reflects the observations in Path2Integrity’s standardised FAIR training conducted within the project. As the description of the training shows (Lindemann & Priess-Buchheit, 2021), the training sessions began with a citation from the 2017 European Code of Conduct for Research Integrity, which advocates that research should be “as open as possible and as closed as necessary” (ALLEA - ECoC 2024, Version 2, 2017). Trainers outlined that, despite the benefits of open data, there can be cost-related, ethical, and legal reasons for keeping data closed. Moreover, trainers emphasised that research data should be FAIR. Learning goals in these training sessions were, among others, to compare and prioritise different ways of handling proper data management and to explain and justify arguments for FAIR data management (Lindemann & Priess-Buchheit, 2021).

Fig. 1 Flowchart of the data selection from the Path2Integrity open data collection (Zollitsch & Wilder, 2022a)

Path2Integrity collected the data and informed consent sheets anonymously via online surveys and stored them safely at Kiel University. To calculate the measures of this study, we used the published open data collection (Zollitsch & Wilder, 2022a, b) with the support of the evaluation work package team.

The P2I evaluation form M contains two FAIR-related questions (hereafter referred to as SPM8 and SCSM8), each with four possible answers.

In SPM8/A, students answered the multiple-choice question:

“In his research project, Ali has collected a large amount of research data that he would like to make available open access in accordance with the FAIR guiding principles. To follow good research practices, Ali ensures that his data …” (Please choose only one of the following: ).

• A1: are described with rich metadata to be machine-readable.

• A2: are stored on FAIR foundation servers.

• A3: can be found in every database possible.

• A4: do not contain any information about sexual orientation.

In SCSM8/B, students answered the question:

“Ali’s decision (above) is in line with good research practices because …” (Please choose only one of the following: ).

• B1: it ensures reliable research results.

• B2: it ensures the equal treatment of all research data.

• B3: it is Ali’s duty to follow this process.

• B4: the legal framework governing universities requires it.

The sample sizes from the data collection (Zollitsch & Wilder, 2022a) are as follows:

• intervention group, pre-test: n = 96, with 6 missing values;

• intervention group, post-test: n = 78, with 3 missing values;

• control group, pre-test: n = 418, with 34 missing values;

• control group, post-test: n = 163, with 7 missing values.

Table 1 summarises the data sample and the characteristics stratified for each answer.

Table 1 Data sample characteristics for SPM8/A and SCSM8/B

Question SPM8/A targets the students’ scientific action, whereas SCSM8/B aims at their justification in SPM8/A. Path2Integrity expected students to answer A1 and B1 (Zollitsch et al., 2021). The answers A2, A3, and A4 are mere distractors from A1. However, Zollitsch, Wilder and Priess-Buchheit (2021) explain that in the case of SCSM8, the answers B2, B3, and B4 are different justification patterns.

The feedback sheet (Niesel et al., 2021) asks the following eleven questions about the (no-)FAIR training:

Motivational Factor: My participation in the (no-)FAIR training was encouraged by the trainer.

Instructional Factor: For me, the (no-)FAIR training was adequately guided.

Safe Space Factor: I could express my opinion freely in the (no-)FAIR group.

Participation Factor: I was able to contribute something to the (no-)FAIR group.

Appropriateness Factor: The duration of the (no-)FAIR training was appropriate to me.

Comprehensibility Factor: I clearly understood the task of the (no-)FAIR training.

Commitment Factor: For me, the structure of the (no-)FAIR training was good to follow.

Satisfaction Factor: I am satisfied with the (no-)FAIR training as a whole.

Trust Factor: I would recommend the (no-)FAIR training to my fellows.

Usefulness Factor: I have learned something useful in the (no-)FAIR training.

Practical Relevance Factor: I could connect the (no-)FAIR training with my everyday life.

In addition to the two FAIR-related questions SPM8 and SCSM8 above, the Path2Integrity feedback sheet dataset (Zollitsch & Wilder, 2022b) shows that 95 students answered these Likert-scale questions. We recorded the answers on a scale ranging from 2 (the positive end) through 0 (the neutral middle) to −2 (the opposing end).

Participants

From the Path2Integrity open data collection (see Fig. 1), we selected the sample of students who voluntarily attended the international “FAIR training” (intervention group, n = 96 in pre-test) and a sample who filled out the questionnaire (control group, n = 418 in pre-test).

According to the Path2Integrity open data metadata, “the control group was collected in two rounds, mainly between March 2021 and January 2022. The students for the control groups were mostly European students whose educators embedded the [questionnaire] into their courses. Therefore, they were also contacted through their trainers. Due to differences in intensity and content of the courses and to increases in the quantity of the non-randomized control group, both courses in research integrity, responsible conduct of research, good research practices, scientific working, research ethics or related topics, as well as non-related courses, were included in the control groups. In total, we reached out to 864 trainers … From these, 60 trainers allowed us to conduct [our questionnaire] within a total of 89 of their groups.” (Zollitsch et al., 2022).

The study’s intervention group participated in a one-day, role-playing online training that included a 90-minute FAIR training with the standardised learning card M8 (Lindemann & Priess-Buchheit, 2021). Path2Integrity asked all students to fill out the questionnaires voluntarily. As Table 2 shows, this data was collected online at different times.

Table 2 Data collection before and after the training

Table 3 displays the professional, disciplinary, age, country, and gender distribution of the intervention and control group in the pre-test.

Table 3 Distributions of intervention and control group in the pre-test

Table 2 outlines that students in the intervention group were given version M of the P2I questionnaire (Zollitsch et al., 2020) once at the beginning and immediately at the end of the training. The control group answered the same questionnaire (Zollitsch et al., 2020) at the beginning and end of their no-FAIR training. Also, all students in FAIR training were asked to fill out the feedback sheet (Niesel et al., 2021).

Procedure

Completing the questionnaires was voluntary, and not all students took part in the pre- or post-evaluation. Because Path2Integrity did not offer any incentives for completing the questionnaires, we assume that some students grew tired and dropped out before the post-test was collected (see our comment on attrition rates below).

For each hypothesis (see above), we evaluated whether the FAIR training impacted the students’ response behaviour via Pearson’s chi-square test, with the null hypothesis being that response behaviour is independent of pre- and post-testing. In the case of hypothesis 3, in which we explicitly targeted the answer category B4, the data were first collapsed over the B4 column to obtain a 2 × 2 table. We regard a p-value of less than 0.01 as statistically significant. In case of rejection, we planned to evaluate the source of the shift by looking at Pearson’s standardised residuals of the fit.

We regard standardised residuals with absolute values above 2 (above 3) as an indication that the respective cell has an impact (strong impact) on rejecting the null. As a measure of association and to evaluate the effect size of the shift towards the respective answer, we present the odds ratio for choosing the respective answer (A1, B1, or B4, respectively) over the other categories. Furthermore, we used a volcano plot to assess which learning factors students rated most positively.
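To make this procedure concrete, the following is a minimal sketch in Python (NumPy/SciPy) of the chi-square test, standardised residuals, and odds ratio described above. The cell counts are hypothetical placeholders rather than the study’s actual data, and the residual formula follows the common adjusted-residual convention (as in R’s chisq.test()$stdres) rather than a documented Path2Integrity script.

```python
# Minimal sketch of the analysis; the counts below are hypothetical
# placeholders, not the study's actual cell counts.
import numpy as np
from scipy import stats

# Rows: pre-test, post-test; columns: answers A1-A4 (hypothetical counts).
table = np.array([[17, 30, 28, 15],
                  [35, 14, 16, 10]])

# Pearson's chi-square test of independence (null: response behaviour is
# independent of pre-/post-testing).
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)

# Adjusted standardised residuals: (observed - expected) scaled by the
# expected count and the row/column shares (as in R's chisq.test()$stdres).
n = table.sum()
row_share = table.sum(axis=1, keepdims=True) / n
col_share = table.sum(axis=0, keepdims=True) / n
std_resid = (table - expected) / np.sqrt(expected * (1 - row_share) * (1 - col_share))

# Odds ratio for choosing A1 over all other categories, post- vs. pre-test,
# with a 99% Wald confidence interval computed on the log scale.
a1_pre, other_pre = table[0, 0], table[0, 1:].sum()
a1_post, other_post = table[1, 0], table[1, 1:].sum()
log_or = np.log((a1_post / other_post) / (a1_pre / other_pre))
se = np.sqrt(1 / a1_pre + 1 / other_pre + 1 / a1_post + 1 / other_post)
z = stats.norm.ppf(0.995)  # two-sided 99% interval
ci = np.exp([log_or - z * se, log_or + z * se])

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
print("standardised residuals:\n", np.round(std_resid, 2))
print(f"odds ratio (A1, post vs. pre) = {np.exp(log_or):.2f}, "
      f"99% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

For hypothesis 3, the same sketch applies after collapsing the table over the B4 column into a 2 × 2 table, as described above.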

Limitations

As described above, the study’s data is from non-randomized students. We also detected that data from the intervention group “only” has two group indicators, G2 and G3, referring to two different trainers.

We consider this study to have an acceptable risk of bias for a quasi-experimental design, which may lead to indirectness of evidence and very little inconsistency (heterogeneity). The quality of evidence in our quasi-experimental design is lower than in randomised controlled trials. Nevertheless, this study uses a standardised instrument and, by using standardised training (Priess-Buchheit et al., 2021), controls more circumstances than usual. This is why this study is a reliable first source towards closing the research gap on FAIR training in higher education. The following paragraphs comment on our prior assessment of possible factors affecting the quality of the study’s evidence.

Because we used a quasi-experimental and not a randomised design, we accept that Path2Integrity did not blind the students. Path2Integrity partners observed that the students were aware that there were other (no-)FAIR training sessions. There was no cross-over effect because participation was voluntary. We rate the effect of non-blinding as marginal and the risk of bias in this domain as very low.

In contrast, we have a medium risk of bias due to incomplete outcome data and missing participation (see Fig. 1). The attrition rate is high but within the normal range for educational online studies (Zhou & Fishbach, 2016). Figure 1 outlines the dropout rate of students who were no longer available to participate in the study and were absent for the post-test.

Furthermore, we did not preregister this study. However, we wanted to be as thorough as possible. To lower the risk of bias due to selective outcome reporting, we report all data and highlight the leading and meaningful results in the text and in easy-to-read tables and graphics. We report our intervention and control group results using odds ratios.

Nevertheless, we have a medium risk of confounding factors within all groups. We nonetheless decided to proceed with evaluating the FAIR training because otherwise we would be left empty-handed. We accept this medium risk of confounding in favour of demonstrating factors that may influence FAIR training and can be controlled in future studies.

Overall, this study “only” assesses the short-term effectiveness and observed factors of a 90-minute FAIR training. On the one hand, such a short training might be a limitation; on the other hand, a significant shift after a 90-minute session is a promising result.

Results

Regarding hypothesis 1: Is FAIR training effective?

The odds of students suggesting a scientific action that aligns with the FAIR guiding principles increase 3.75-fold after attending the FAIR training, leading to 46.7% of the students suggesting FAIR actions after the training compared to only 18.9% before the training (see Table 4). The 99%-confidence interval of the odds ratio is (1.50, 9.37). The p-value for rejecting independence of pre- and post-testing is p = 0.0019 (\(\chi^2_3 = 14.8088\)). We regard both the effect size and the rejection of the null as significant.
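As a rough check, a reader can recover an odds ratio of about this size directly from the two percentages above; the reported value of 3.75 depends on the underlying cell counts in Table 4, so the small difference is rounding:

\(\text{OR} \approx \dfrac{0.467/(1-0.467)}{0.189/(1-0.189)} = \dfrac{0.876}{0.233} \approx 3.76\)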

Table 4 Marginals for pre- and post-testing

Table 5 shows Pearson’s standardised residuals of the model fit. Here, the rejection of independence results from a shift in the post-test towards the suggested scientific action in line with the FAIR guiding principles (A1) and away from all distractor categories. A clear impact of the FAIR training is visible, diverting answers away from “are stored on FAIR foundation servers” (A2) (Table 5).

Table 5 Standardized residuals of the scientific action

We also expected that students would modify how they justify their FAIR actions. The odds of students justifying their scientific action as ensuring reliable research increase only marginally, an estimated 1.09-fold, after the training. The 99%-confidence interval of the odds ratio is (0.47, 2.50), and the p-value for rejecting independence of the response pattern from pre- and post-testing lies at p = 0.0553. Thus, this effect should not be seen as statistically significant. Notably, the p-value for rejecting independence is peculiarly low for this small effect size.

Table 6 Standardized residuals of the justification

Table 6 shows Pearson’s standardised residuals of the model fit. As column B1 shows, the deviation of the data from the null model is minimal in the B1 cells. In contrast, the large absolute values in the cells of column B4 (> 2.5) indicate a substantial deviation between the data and the null model. These values document a discrepancy of well above two standard deviations, which clarifies why we see such a low p-value for the overall statistical test. Thus, not only did the FAIR training have no impact on how students justified their FAIR action; the most substantial shift in the response behaviour is also not towards B1 (“it ensures reliable research results,” the expected answer) but towards B2 (“it ensures the equal treatment of research data”). The shift results in an odds ratio of 1.87 for answering B2 after course completion. The 99%-confidence interval for this effect is (0.75, 4.67). This increase is notable despite the effect, in and of itself, being measured as statistically insignificant. See the discussion on Zeitgeist below for a possible explanation of this finding.

Regarding hypothesis 2: Is there an effect in the control group?

We could not reproduce the significant change towards suggested FAIR actions from the intervention group in the control group, where students did not receive FAIR training. The control group received very similar training with a different (no-FAIR) focus, and we can show that there is an estimated odds ratio of 1.31 (99%-confidence interval (0.72, 2.39), p-value 0.6433) for choosing an action that aligns with the FAIR guiding principles. This effect is far from statistically significant.

The estimated odds of choosing a justification for their action that aligns with the FAIR guiding principles even decrease slightly, with an odds ratio of 0.99 (99%-confidence interval (0.58, 1.70), p-value 0.3471). This effect is also far from significant.

Regarding hypothesis 3: What is the relation between universities’ legal frameworks and FAIR training?

We hypothesised that students from universities with legal frameworks on FAIR guiding principles choose to attend voluntary FAIR training. In this case, students in the voluntary FAIR training may justify their scientific actions more often with the legal framework. Indeed, the odds ratio for justifying their scientific actions with a legal framework is 1.56 for students in voluntary FAIR training relative to students in the control group. However, the effect is not statistically significant (99%-confidence interval (0.30, 1.17), p-value 0.0472).

Regarding point 4: Which learning factors can we observe in FAIR training?

Furthermore, the FAIR training received uniformly positive results in the immediate feedback. Specifically, the “safe space factor” question was answered with the highest mean of 1.47 in the FAIR training, with 2 being the positive end of the Likert scale, 0 being the neutral middle, and −2 the opposing end. The appropriateness factor (“I found the length of the learning units appropriate.”) was rated lowest, but still positive, with 0.75 in the FAIR group.

Fig. 2 Volcano plot of FAIR training factors showing the differences in the averages (the effect sizes) on the x-axis and the negative logarithm to base 10 of the p-value on the y-axis

The volcano plot (mean response on the x-axis and the p-value of a symmetric one-sample t-test with H0: the expected answer is neutral on the y-axis) displayed in Fig. 2 shows that all 11 learning factors from the feedback deviate positively from the neutral middle. To account for the multiple testing problem, we only regarded an effect with a −log10(p-value) above −log10(0.01/11) = 3.04 as statistically significant.
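For illustration, a minimal Python sketch of this volcano-plot computation could look as follows; the response matrix is simulated rather than taken from the published feedback dataset (Zollitsch & Wilder, 2022b), and the factor labels simply mirror the list above.

```python
# Minimal sketch of the volcano-plot computation; the Likert responses below
# are simulated placeholders, not the actual Path2Integrity feedback data.
import numpy as np
from scipy import stats

factors = ["motivation", "instruction", "safe space", "participation",
           "appropriateness", "comprehensibility", "commitment",
           "satisfaction", "trust", "usefulness", "practical relevance"]

rng = np.random.default_rng(0)
# Simulated answers from 95 respondents on the -2..2 Likert scale.
responses = rng.integers(-2, 3, size=(95, len(factors)))

# Two-sided one-sample t-test per factor, H0: the expected answer is neutral (0).
means = responses.mean(axis=0)
t_stat, p_val = stats.ttest_1samp(responses, popmean=0)
neg_log10_p = -np.log10(p_val)

# Bonferroni-style threshold for 11 tests at the 0.01 level (about 3.04).
threshold = -np.log10(0.01 / len(factors))

for name, m, nlp in zip(factors, means, neg_log10_p):
    flag = "*" if nlp > threshold else " "
    print(f"{name:20s} mean = {m:+.2f}  -log10(p) = {nlp:6.2f} {flag}")
```

Plotting the per-factor means against their −log10(p-values), for example with matplotlib’s scatter, then yields the layout shown in Fig. 2.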

Statistical significance is the case for all learning factors. The five FAIR learning factors with the highest positive feedback are:

1. Safe space: “I could express my opinion freely in the group” (mean 1.47, −log10(p-value) = 32.07).

2. Participation: “I was able to contribute something to the group” (mean 1.32, −log10(p-value) = 26.57).

3. Motivation: “I was encouraged by the trainer to participate actively” (mean 1.27, −log10(p-value) = 27.77).

4. Usefulness: “I have learned something useful” (mean 1.27, −log10(p-value) = 25.84).

5. Satisfaction: “I am satisfied with the learning unit as a whole” (mean 1.22, −log10(p-value) = 23.91).

Discussion and Conclusion

The results above show that FAIR training works. FAIR training supports open science actions in higher education by implementing FAIR guiding principles. The findings underline and support the current efforts to use FAIR training as a strategy.

Our findings about FAIR training in a nutshell:

• (Future) researchers’ proficiency in suggesting a scientific action aligned with FAIR needs improvement.

• The 90-minute FAIR training shows high learning success for learners from different disciplines and qualification cycles in higher education.

• Institutional incentives such as university guidelines on FAIR guiding principles may push students to register (even) in extracurricular voluntary FAIR training.

• Learners from higher education say they learn something useful in FAIR training.

First, a noticeable and unexpected result is the FAIR scientific action score, which rose from 18.9% in the pre-test to 46.7% in the post-test. Explanations for this success in FAIR training open new possibilities for nurturing open science actions with little time investment and a “big” impact.

Second, however, when comparing the suggested scientific actions and justifications of the FAIR guiding principles, most students could only suggest a suitable scientific action, not a justification, after participating in the FAIR training. Justifying research actions requires deeper insights, skills, and a fitting mindset. We link the stagnation of how students justified their actions with Goddiksen and Gjerris’s (2022) argument that changes of mindset take (training) time.

Third, we connect the unexpected increase in answer B2, “it ensures the equal treatment of all research data”, with the Path2Integrity learning method of role play (Hermeking & Priess-Buchheit, 2022; Priess-Buchheit et al., 2021), which captures the Zeitgeist we are currently living in (Koterwas et al., 2021; Pownall et al., 2023; Fuentes et al., 2020). Role play is a method to encourage cooperative learning, thus fostering perspectives from different people (D-diversity), fair treatment, access, opportunity, and advancement for all learners (E-equity), and inclusive decision-making processes (I-inclusion). At the same time, achieving equity and equal chances are objectives that are currently being implemented not only in higher education reforms but also in society at large (Fuentes et al., 2020). In addition, role play also stands for the current shift from a teacher-centred to a student-centred approach to teaching (Koterwas et al., 2021; Pownall et al., 2023), giving students more free space to develop their ideas and participate in group discussions (Hermeking & Priess-Buchheit, 2022; Koterwas et al., 2021). Due to the training methods (Hermeking & Priess-Buchheit, 2022; Koterwas et al., 2021; Pownall et al., 2023) and current trends (Fuentes et al., 2020), students may have transferred this process from “how” they learned to “what” they learned.

Another result from the Path2Integrity training supports our Zeitgeist explanation: the so-called “research procedure training” (Zollitsch et al., 2022), which also used role play as its main learning method. With training, 14% more students in the research procedure training ensure equal treatment in research through their scientific actions; without training, only 4% do (Zollitsch et al., 2022, p. 60). Perhaps implicit learning, which the training did not intend, induced the shifts towards “it ensures the equal treatment of all research data”. This finding, that the answer “it ensures the equal treatment of all research data” increased in two 90-minute sessions reflecting the communal and equal learning process, is critical and needs to be assessed in the future.

Fourth, looking at FAIR training gateways, universities with legal frameworks on FAIR guiding principles seem to push students to attend voluntary FAIR training. This finding points to a strategy in which the FAIR guiding principles either become part of university regulations, with sanctions for non-compliance, or FAIR training becomes mandatory for (future) researchers affiliated with the university, since change is less likely to happen if using FAIR is voluntary.

Fifth, students rated the FAIR training as very useful and satisfactory. The positive feedback from learners in voluntary FAIR training underlines the need to open the doors for a new generation to the “new normal” called open science. As shown above, learners considered that they had gained helpful knowledge after a motivating and engaging FAIR training. These findings align with the educational development towards student-centred training (Koterwas et al., 2021; Pownall et al., 2023). Important learning factors in effective FAIR training seem to be creating a safe space, letting students contribute, and encouraging students to engage in the training. The effectiveness of these learning factors needs to be examined in future studies.

Sixth, considering that open science principles, research, and academic reform are sociocultural concepts, we additionally reflect on the results in light of their geographical distribution. Table 3 shows that many (29.79%) of the intervention group came from the United Kingdom, whereas the largest shares of the control group, 36% and 33%, came from Germany and Poland, respectively. As Videnoja et al. (2024) show, the UK, Germany, and Poland have different coordination approaches in research integrity training. Whereas the UK and Germany have nationally coordinated workshops and train-the-trainer programmes, Poland has no nationally coordinated training program. In particular, the EOSC Observatory (https://eoscobservatory.eosc-portal.eu/home) shows that all three countries, Germany, Poland, and the UK, have no national monitoring of skills/training for Open Science. The participants came from many higher education institutions, and a detailed landscape of their Open Science training is unavailable. Although there is a difference in the geographical distribution between the intervention and control groups, the study’s pre-test shows no difference between the groups: at the pre-test, 18.89% had positive results in the UK-dominated intervention group and 18.25% in the German- and Polish-dominated control group. This seems to indicate that, although there are different sociocultural contexts concerning Open Science, the knowledge of higher education students is at the same low level across the geographical distribution. Further research on open science training in sociocultural settings will be able to reflect on whether cultural trends influence FAIR training differently in Europe’s sociocultural settings.

In summary, by participating in the FAIR training, students familiarised themselves with the FAIR guiding principles and learned to be advocates and champions of the new normal, open science. The study also reveals the need for further training improvement, particularly in enhancing students’ ability to justify FAIR actions, which presumably takes longer than 90 minutes. However, the study shows that initiatives like the learning programs mentioned above are a compelling step toward supporting and promoting open science. Nevertheless, these initiatives should be part of continuous educational efforts, not merely a stand-alone project or an add-on.

Promoting these FAIR guiding principles is key to fostering future high-quality research sharing, and - as the study shows - training towards FAIR is one effective strategy for open science in higher education.