Challenge Statement

In the absence of convergence research within graduate programs, inexperienced graduates may experience interdisciplinary communication gaps upon entering the workforce, primarily in how they conduct themselves professionally: their ability to communicate, organize, and perform as team members [1,2,3]. A multi-institutional, comprehensive framework that equips students with the tools and skills to thrive in a transdisciplinary work environment is needed. We propose a multi-faceted framework to bridge the expertise gap and prepare students for the complexities of real-world problem-solving. Our approach relies on three key components: project-based learning, convergence research, and Merrill’s first principles of instruction for structure.

Project-Based Learning

The first key component that allows our framework’s effective implementation is project-based learning (PjBL). PjBL centers instruction on real-life problems; our participants are presented with real-life biointerface problems by the experts leading mini-capstone projects. Utilizing PjBL methods allowed us to immerse students in authentic, interdisciplinary challenges while teaching technical skills, such as basic programming and machine learning, and transferable skills, such as collaborating in a team to solve research problems. This approach provides students with experience and tools that better equip them for convergence research after graduation, and the students view their participation positively [4]. Bioengineering training must evolve over time, and project-based learning readily accommodates such change.

Our project aligns with the principles of Miller et al. 2023 [5]; we aim to innovate solutions through convergence research as teams and train students in interdisciplinary skills, explicitly training them in both the biological or material context and the data science context. During the Fourth BME Education Summit, the presented workshops covered both project-based and problem-based learning; their focus, however, was on problem-based learning specifically, as they mention using “ill-defined assignments” and team-based learning, showing that these topics are relevant to the bioengineering curriculum [6]. Furthermore, project-based learning fosters students’ ability to approach a problem as an interdisciplinary opportunity and to innovate actively and creatively from gained knowledge [4, 7, 8].

Convergence Research

The second vital component of our framework is performing convergence research with the participants. The primary goal of including convergence research is to foster collaboration across diverse disciplines to find innovative solutions; it can even be viewed as an enhanced development of interdisciplinary research, with specific disciplines coming together for innovation [9]. Innovative solutions could be novel approaches, novel technologies, or new hypotheses addressing scientific or social issues. We promote the creation of innovative solutions by building convergence science teams whose members, spanning fields such as computer science, biological sciences, and engineering, work collectively and combine their disciplinary expertise to tackle the complex problems described in the previous component.

The NSF recognizes the need for successful projects in convergence research within the science and engineering domain and reiterates that convergence approaches must address a specific problem. The experts prepare their teams by equipping all team members with the necessary interdisciplinary technical skills; the experts then demonstrate and integrate their knowledge into a novel approach, involving undergraduate and graduate students and allowing the students to develop skills outside their discipline [10].

It is hard for our education system to continually incorporate new knowledge into curricula, train educators, and prepare students for the workforce; for example, Herr et al. 2019 state, “Convergence accelerates the creation of new knowledge that does not fit neatly into traditional curricula” [11]. Our framework fits the items their workshop deems necessary to enact in convergence frameworks. We use our framework to train students through ‘holistic convergence education’: the students gain the transferable and technical skills needed to participate in convergence research [11]. It is crucial to train students to be continual learners so they can keep up with the expanding knowledge and training necessary to enter their careers.

A similar framework is Yale's Integrated Graduate Program in Physical and Engineering Biology. While that program's course curriculum spans semesters, its framework, like ours, aims to prepare students for careers requiring a wide range of interdisciplinary skills. Their Integrated Workshop course closely resembles our mini-capstones: faculty are involved with the modules, students from different backgrounds are paired, multiple modules are offered, the goals are team science and skill building, and the course includes meetings with experts. Two key differences are that we have not incorporated outreach to the public or K-12 students, and our workshop is not integrated into participating institutions’ curricula [12].

Other frameworks discuss the use of cloud-based tools, like our Google Classroom and Google Colaboratory [13, 14], and note that including these tools within a convergence architecture helps develop the professional, collaborative, and technical skills students need for career development [15, 16].

Our framework intertwines project-based learning with mentorship from principal investigators and collaboration with postdoctoral researchers, utilizing a convergence research approach that diffuses into each of the five instructional areas (Fig. 1). By following this model, we supply a blueprint for other transdisciplinary projects. This initiative successfully bridges the expertise gap and nurtures practical convergence learning experiences within the computational biointerface (material–biology interface). Guidance by experts in the field allows students’ previous experience to inform their input as they develop transdisciplinary skills.

Fig. 1

Merrill’s first five principles of instruction. The first principle concludes that instruction should center on a real-life problem. The learners then activate new knowledge with a foundation of existing knowledge and demonstrations of how tasks are performed when encountering the problem; learners are allowed to apply and practice their knowledge to their real-life problem. Lastly, the learners can integrate their new knowledge by transferring skills across other disciplines or unrelated problems. This figure shows how, when using these principles, the real-life problem is diffused throughout the other principles and is cyclical and iterative. Figure adapted from Merrill 2002; created with BioRender.com

Merrill’s First Principles of Instruction

Lastly, an essential component of our framework is guiding our participants in learning convergence research skills via a structured framework (Fig. 2) that adapts Merrill’s first principles of instruction [17, 18]. The first principle, centering the learning on a real-life problem, is implemented through the PjBL approach and in-person workshops. To apply the second principle, which builds on a student’s prior knowledge, the field experts identified base knowledge requirements for each project and screened students accordingly. Demonstration, the third principle, was used as a teaching tool within the projects and workshops to explain desired outcomes. The concluding in-person workshop, where the teams presented and finalized their projects, implements the fourth principle: the students could apply their knowledge and skills in an authentic context. To further participants’ continuous improvement, integration (the fifth principle) occurs as students use their knowledge and experience from the program in applications post-workshop, such as presenting at the IEEE conference, collaborating on new projects with other laboratories, producing journal articles, or utilizing their new skills in their early careers. This integration allows students to continue working in convergence research beyond the initial program. By combining project-based learning and convergence research and structuring the framework around Merrill’s first principles of instruction, our approach provides a methodical and effective solution to prepare students for the demands of dynamic, interdisciplinary professional work.

Fig. 2

DDMD Advanced Data Science Workshop Framework. This figure shows how the many facets of the workshop draw on Merrill’s first five principles, guiding the participants to perform convergent research and complete the workshop. Participants eventually graduate, having gained a variety of skills applicable to many careers. This framework leads to convergent research artifacts that prepare students for continued real-world applications.

Novel Initiative

To address communication challenges and potential gaps while aligning current graduates’ skills with research laboratories’ expectations, we established a comprehensive synthetic framework (Fig. 2) engaging experts across three disciplines – microbiology, material sciences with a focus on 2D materials, and big data with a focus on Artificial Intelligence and Machine Learning – in interdisciplinary projects. This collaboration facilitated productive dialogues between computer science and bioengineering students, enhancing mutual comprehension and enriching their understanding of each other's fields.

To bolster competencies, we address the skill gaps of graduate students and bring experts and students together for collaborative convergence research. Expert scientists or postdoctoral researchers created mini-capstones and led our participants throughout the process. We could then embrace portions of the Pedagogy-Andragogy-Heutagogy (PAH) continuum [19] – a combined methodology empowering learners to use firsthand experiences for deeper understanding, in which both the teacher and the learner are responsible for the success of the workshop’s outcome. The expert guides the team, focusing on problem-solving in a graduate education convergence framework, while the learner is self-directed, with an emphasis on the competency and capability of the participants [19].

This solution centers on enhancing students' aptitude for interdisciplinary, team-based problem-solving. By integrating 2D material science with computational biointerface research, we introduced a convergence framework for graduate students. This framework, guided by senior scientists and postdoctoral researchers and structured by Merrill’s first principles of instruction [17, 18], embraces project-based learning, convergence teaming, and short course modules. Other institutions have implemented convergence research training and support; some include e-learning modules [20, 21]. Ultimately, this culminates in students’ participation in transdisciplinary projects and preparation for future careers.

Development and Implementation

Convergence Problem and Team Science

Our framework, the Data-Driven Material Discovery (DDMD) Advanced Data Science Workshop, incorporates Merrill’s first five principles [17, 18]. It was built through a multi-institutional Data-Driven Material Discovery Program. The complete program spanned 18 months and included numerous meetings, workshops, and dedicated office hours (Fig. 2). The framework’s architecture utilized team projects, project-based learning, online training modules, and data collection for workshop improvement via surveys at each step. Through this platform, we engaged participants’ existing knowledge with short courses, certificates, and mentorship, which cultivated collaboration and enacted the second principle. We took students from their previous knowledge and expanded it to improve their interdisciplinary skillset. We promoted learning by enabling participants to demonstrate, apply, and integrate their new knowledge [17, 18]. Aligning our objectives across the discipline skills learned enabled this integration of new knowledge [22].

Mini-capstone Projects and Team Selection

The first of Merrill’s principles asserts that problems pertinent to the real world are the most impactful for effective learning [17, 18]. We used this to our advantage when developing the mini-capstones for this framework. Faculty from multiple institutions, along with colleagues and postdoctoral researchers from their laboratories, worked to create this framework.

The program started with nine experts creating 14 projects spanning four fields: data science (Artificial Intelligence/Machine Learning), material science, biofilm engineering, and biointerface (Table 1). The nine experts were assembled from three disciplines: microbiology, material sciences with a focus on 2D materials, and big data. The experts created a set of modules, or short courses, covering the knowledge participants needed to participate in the convergence research projects.

Table 1 Mini-capstone projects

P001–P008 were the eight projects participants selected for the in-person workshop.

Students from the experts’ labs were invited to participate, and at the end of the pre-workshop and skills short course workshop, the project leaders discussed their project goals. Afterward, students chose two or three projects in which they were interested, and based on this interest survey, eight of the 14 projects were selected for an all-day workshop.

Over 20 learners formed eight teams to tackle the mini-capstones, which included topics such as digital image processing, gene expression analysis, and material prediction. Each project team was led by a postdoctoral researcher or scientist representing a related scientific field (computer, material, or biofilm science). Each expert selected their team based on project interest while balancing disciplinary expertise and skill between graduate and undergraduate students, requiring at least one person from each discipline. Our focus was to balance the disciplines, encouraging collaboration and learning to talk across fields, and to balance career levels, encouraging a wide range of skills and training for early-career participants. Organizing the teams this way facilitated a transition from initial skillsets to a convergence skillset, and the participants were able to tackle real problems, allowing synergy among skill learning, collaboration, and convergence research.

Online Training with Short Courses, Office Hours, and Pre-Workshop Meeting

From the selected eight projects, the field experts determined the base knowledge required of participants and curated short courses accordingly. Each short course included topic-specific learning objectives, which allowed us to activate participants’ prior knowledge as a foundation for new knowledge (Merrill’s second principle) [17, 18]. The experts used Jupyter Notebook [23] and Google Colaboratory [14] to reduce access barriers for the participants. The short courses were housed in a Google Classroom [13] for the graduate and undergraduate students to work through at their own pace. Each mini-capstone project had a project outline document describing the science problem, followed by computational tasks to address it. The participants also had access to a template notebook as a project programming playground on a toy dataset (Online Resource 1). An orientation workshop for these modules allowed the students to choose which projects to join for the remaining workshops and office hours.
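As an illustration of the kind of exercise such a template notebook might pose, a classification task in the spirit of the essential-gene-prediction mini-capstone (Online Resource 1) can be sketched with a simple nearest-centroid rule in pure Python. The feature names, values, and labels below are invented toy data for illustration, not the workshop’s actual materials:

```python
import math

# Toy dataset (invented for illustration): each gene has two numeric
# features, e.g. an expression level and a conservation score, plus a
# label marking whether the gene is "essential".
train = [
    ((0.9, 0.8), "essential"),
    ((0.8, 0.9), "essential"),
    ((0.2, 0.1), "nonessential"),
    ((0.1, 0.3), "nonessential"),
]

def centroids(data):
    """Compute the mean feature vector for each class."""
    sums, counts = {}, {}
    for features, label in data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def predict(features, cents):
    """Assign the class whose centroid is nearest (Euclidean distance)."""
    return min(cents, key=lambda label: math.dist(features, cents[label]))

cents = centroids(train)
print(predict((0.85, 0.7), cents))  # prints: essential
```

In the workshop itself, participants worked with real spreadsheets of gene data and full machine learning libraries; this sketch only conveys the shape of the task a toy-dataset playground supports.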

Our short courses consisted of nine topics, and we kept the courses available to participants for six months after the conclusion of the workshop. These courses gave participants background knowledge of the projects’ topics. Given the convergent nature of these projects, it was essential for the groups to meet and collaboratively address questions. Therefore, the small groups met twice during the interim between workshops. Each project group was required to hold three hours of office hours led by the project expert: a two-hour introduction to the project and a subsequent one-hour session for work time and questions and answers.

Certificates and Short Course Workshop

Our approach to technical skill training was to promote learning new knowledge and have the learners demonstrate it [17, 18]. Our experts awarded certificates to participants who finished their courses by passing the corresponding quizzes or assignments. The certificates obtained from the short courses demonstrated the expansion of the students’ existing knowledge, allowing them to improve their skills by acquiring new interdisciplinary subjects.

In-Person Workshop, Group Work, and Presentations

The comprehensive process of project creation, team formation, and equipping participants with skillsets for project engagement culminated in an in-person workshop where project presentations and finalization occurred. This facilitated ongoing learning for participants through the application of new knowledge, Merrill’s fourth principle [17, 18]. For our final meeting, we held an in-person workshop with the option for attendees to join online via Zoom. During the first half of the day, one discipline presented the projects to the whole assembly. Subsequently, we allocated time for collaborative group work, during which the students presented the current status of the projects. Upon completing this workshop, students had opportunities to ask questions and continue working on their projects with their teams.

Artifacts and Conference(s)

The integration (Merrill’s fifth principle) of new learning was accomplished as students applied their acquired knowledge, skills, and experiences from the workshop to other contexts, producing presentations in both the 2021 and 2022 IEEE conference proceedings and publishing machine learning models [24,25,26,27,28,29,30,31,32,33,34,35,36,37], an important aspect of continuous improvement. The teams could choose to continue their work after the conclusion of the workshop. Participants could prepare journal articles across disciplines, present at the IEEE Bioinformatics conference, and have more choices in their early careers. For instance, three bioengineering participants secured bioinformatics positions in the biotechnology industry; this not only highlights their ability to use their new convergence research skillset but also signifies a successful transition into careers that once seemed unavailable to them, making a future in convergence research possible for the participants.

Data Collection

Our external evaluator distributed pre- and post-workshop forms, and a small-group form for participants during office hours, via email using Survey Crafter Professional 4.0. The questions pertained to relevant convergence team-building data (e.g., DDMD program role (academic position), previous experience with various skills, and goals), and we asked for feedback on what participants thought went well, strengths, and what could be improved (Online Resource 2). Of our 22 participants, response counts varied across the surveys, as detailed in the Reflection section.

Reflection

Short Course Workshop

Following the short course workshop, 13 participants rated the quality (1 to 4; 1 = low quality, 4 = high quality) of six workshop facets via survey. Table 2 displays the frequency distributions of recorded ratings and the mean ratings.

Table 2 Quality of six workshop facets

Along with these ratings, participants were asked, “What are your expectations for this workshop?” in an open-ended format, referring to what they expected to get out of the 18-month experience. We compared these responses with participants’ answers to the question “What were the strengths of the workshop?” asked after the in-person workshop, using a Jupyter Notebook Python program to create a word cloud of the most prevalent words submitted in response (Fig. 3, Online Resource 3). Comparing the panels in Fig. 3 gives a visual representation of students’ expectations being met upon completing the workshop.

Fig. 3
figure 3

Word cloud from open-ended responses to the pre-workshop, “What are your expectations for this workshop?” and post-in-person-workshop, “What were the strengths of this workshop?” The larger the word, the more prevalent the word was per total word count.
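The frequency counting behind such a word cloud can be sketched briefly. The stopword list and survey responses below are placeholders (the actual text is in Online Resource 3); the resulting counts could then be rendered with, for example, the third-party wordcloud package’s generate_from_frequencies method:

```python
import re
from collections import Counter

# Placeholder stopword list; a real analysis would use a fuller set.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "i", "for", "in"}

def word_frequencies(responses):
    """Count words across all responses, lowercased, skipping stopwords."""
    words = []
    for response in responses:
        words += [w for w in re.findall(r"[a-z']+", response.lower())
                  if w not in STOPWORDS]
    return Counter(words)

# Placeholder responses standing in for the open-ended survey text:
responses = [
    "I expect to learn machine learning skills",
    "Learn to collaborate and learn new skills",
]
freqs = word_frequencies(responses)
print(freqs.most_common(2))  # the most prevalent words and their counts
```

Word size in the cloud is proportional to these counts per total word count, matching the figure caption above.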

Small Groups and Office Hours

The participants were surveyed to gauge the effectiveness of our small groups' work. The data suggest good outcomes, but the small sample size (n = 4) does not allow robust interpretation. We will explore this dimension in our future cohorts.

In-Person Workshop in Montana

This experience culminated in an in-person workshop in Montana, where we started with capstone presentations and the student participants presented the projects' computer science background. We had participants in person and online via Zoom (Fig. 4). Conveying the skill information via Zoom was straightforward, since most of the participation occurred through the short courses.

Fig. 4

Pictures of the in-person workshop in Montana.

A total of 20 participants (of all DDMD roles, from expert to student) in the mini-capstones responded to a survey asking for an overall quality rating of the workshop. Independent of DDMD role, the mean quality rating was 9.26, and no participant responded lower than 7.

Our October 19th in-person workshop had five learning objectives (Table 3). Participants were asked to rate the extent (1 to 5; 1 = not at all, 2 = limited extent, 3 = moderate extent, 4 = substantial extent, 5 = exceptional extent) to which they felt each objective was achieved. Mean ratings and frequency distributions of recorded ratings for each of the five objectives are shown in Table 3.

Table 3 Extent of the attainment of workshop learning objectives

Respondents rated the quality (1 to 4, 1 = low quality, and 4 = high quality) of four workshop facets; mean ratings and numbers of recorded ratings are displayed in Table 4. “I didn’t do this” was an available response that two respondents selected for “consultations with experts.”

Table 4 Workshop quality

The workshop's value was also recognized in terms of understanding the role of machine learning in addressing diverse challenges and the opportunity for one-on-one and group discussions. Based on participants’ views on the strength of this workshop, we accomplished our goal of promoting collaboration and the students’ skillset in interdisciplinary convergence science.

Feedback on Workshop Improvement

Participants provided insightful suggestions for improving the workshop experience. Two respondents advocated for in-person attendance to address time management issues linked to virtual participation (Fig. 4). Another recommended increased engagement for online participants and improved audio/visual components to enhance in-person and virtual aspects. Four participants sought more time for team collaboration, discussions, and programming tasks. Other suggestions included adhering more closely to the agenda, offering greater background on projects and coding possibilities, improving presentation organization, and providing more meaningful data for machine learning training. Reducing the number of projects was proposed to enable deeper exploration using more robust datasets.

In a summative evaluation, 21 respondents retrospectively documented pre- and post-workshop ratings from 1 to 4 (1 = low, 4 = high) concerning ‘their understanding of what it takes (workflow) to harvest data from a variety of sources to address a specific question.’ Retrospective pre- and post-surveys can prevent response-shift bias by addressing the gap between perceptions before and after the workshop, and this approach has been used in other studies [38, 39].

A paired, dependent-samples t-test was conducted to assess pre/post mean differences at the 0.05 significance level, with the null hypothesis stating a difference of 0 and the alternative hypothesis stating a non-zero difference. We calculated the mean rating (MEAN), standard deviation (SD), matched-pair dependent t-statistic (t), p-value (p), the correlation between the matched-pair ratings, and effect size [40]; these results are presented in Table 5. We use Cohen’s interpretation of effect size [41]: 0.8 is a large effect, 0.5 medium, and 0.2 small. The increase in mean ratings from pre- to post-workshop was statistically significant (t = 3.35, p = 0.0032), with a medium effect size (0.72). This analysis examined participants’ perspectives on their understanding of what it takes to harvest data from a variety of sources to address a specific question pre- and post-workshop (Fig. 5).
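The paired t-statistic and effect size above can be reproduced with a short routine; the ratings below are invented placeholders on the 1–4 scale, not the study’s data, and computing the p-value requires a t-distribution CDF (e.g., scipy.stats.ttest_rel), which is omitted here:

```python
import math
from statistics import mean, stdev

def paired_t_and_cohens_d(pre, post):
    """Paired t-statistic and Cohen's d for matched pre/post ratings.

    With d_i = post_i - pre_i:
        t = mean(d) / (sd(d) / sqrt(n))
        Cohen's d (paired) = mean(d) / sd(d)
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    t = mean(diffs) / (sd / math.sqrt(n))
    cohens_d = mean(diffs) / sd
    return t, cohens_d

# Placeholder ratings (not the actual survey data):
pre  = [2, 2, 3, 1, 2, 3, 2, 2]
post = [3, 3, 4, 2, 3, 3, 3, 4]
t, d = paired_t_and_cohens_d(pre, post)
print(f"t = {t:.2f}, Cohen's d = {d:.2f}")
```

Note that for this paired design the effect size equals t/√n, so the reported t = 3.35 with N = 21 gives 3.35/√21 ≈ 0.73, consistent with the 0.72 in Table 5.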

Table 5 Dependent t-test statistics (N = 21)
Fig. 5

Participants’ view of their understanding of what it takes to harvest data from a variety of sources to address a specific question utilizing a retrospective survey. The deep red bars represent pre-workshop ratings and the black bars represent post-workshop ratings

Final Thoughts

Our DDMD Advanced Data Science Workshop convergence research framework successfully produced multiple artifacts [24,25,26,27,28,29,30,31,32,33,34,35,36,37]. Three students without an initial data science background have since graduated and joined bioinformatics laboratories. A retrospective survey of participants’ views of what they gained showed improved skills and knowledge. For the consideration of others, we provide perspectives on the limitations and challenges, strengths, and suggested changes for future implementations.

Limitations and Challenges

This pilot workshop encountered its own unique set of limitations and challenges. The limitations of this framework included small sample sizes, the effect of the program’s duration on workshop evaluation, and our informal evaluation of the workshop’s success based on participant success. Challenges that this and future implementations could encounter include communication difficulties across disciplines and unclear expectations for participants.

Very few participants responded to survey questions about office hours, leading to a small sample size for statistical analysis. To combat this issue in the next iteration, we aim to recruit enough participants to perform an extensive analysis of user feedback. Specifically, we will perform a power analysis to determine the target number of participants, which will guide us in inviting more participants to the workshop.

In this multi-month (18-month) framework with multiple workshops, organization and proper analysis are crucial when considering implementation. We are limited in evaluating whether this workshop enhanced participants’ understanding of the main question: ‘understanding what it takes to harvest data from a variety of sources to address a specific question.’ Given the long timeframe of this framework, the workshop may not be the only factor contributing to this understanding. Future implementations will improve the parameters analyzed to evaluate the workshop.

Evaluation of student success relied on informal project presentations during the final, all-day workshop. We assessed students’ preparedness for convergence science based on whether they successfully completed projects and could participate in the exchange of scientific ideas by presenting at conferences or publishing journal articles. We did not evaluate via engineering education concepts; specifically, we did not assess the student participants at each step of the engineering design process.

This pilot workshop incurred challenges during implementation, including limited feedback from participants and time and project management issues, specifically with communicating expectations to participants and with participants’ time management; for example, some participants did not engage in the short course modules. However, the framework offers the ability to expand the breadth and depth to which a research group can go, contingent upon team communication and participant upskilling. Success depends on collaborative efforts and skill development.

Strengths

The skill most imperative to success in convergence research is communicating with others, regardless of skill level and discipline. Our three disciplines were biofilm engineering, material science, and data science. Our groups used data science, including Artificial Intelligence and Machine Learning, to model material science and biofilms and discover various aspects of the biointerface. The projects examined how materials and biology interact. Any set of disciplines could utilize this framework; the most important requirement is a problem that the disciplines must come together to solve. Data science is the backbone that can bring the other disciplines together, but it takes collaborative teaming to solve the problem. A few possible combinations are biofilm science, biomechanics, and data science; chemistry, biofilm science, and data science; or biomechanics, materials science, and data science. For explicitly BME problems, there are many possibilities, because convergence research allows new answers to previously unknown processes.

The DDMD Advanced Data Science Workshop had many strengths, according to student feedback and expert input. Participants felt that the gap between biology and computation was bridged by students presenting the discipline opposite their own during the in-person workshop, promoting collaboration across disciplines and fostering communication and involvement by all participants. The work groups and office hours facilitated interaction between subject matter experts and data scientists. Participants emphasized the multidisciplinary nature of the workshop: it provided high-quality instruction with active engagement, offered insights into the various capstone projects, showcased state-of-the-art technologies relevant to participants' future professions, and created a comfortable, collaborative environment conducive to learning and questioning.

Changes for Future Implementation

According to feedback from students and experts, we will emphasize the importance of in-person participation in the all-day workshop. While an online attendance option offers flexibility, and finding funds for students’ travel can be challenging, requiring in-person participation may enhance students’ involvement and would address the time management issues linked to virtual participation. If future implementations include online participants, we plan to perform more audio/visual testing to enhance both the in-person and virtual experience and ensure participants can stay fully invested. Participants thought the workshop could improve with increased time for team collaboration, discussions, and programming tasks. For the next implementation, we can extend the workshop components by increasing the number of office hours and the focused time set aside for the short course modules, so participants do not have to choose between the workshop and their current research, homework, or studies. Other changes include creating succinct schedules to follow, offering more background on the projects and coding possibilities, providing more meaningful data for the small assignments within the modules so that students can practice working with biological data, and ensuring the datasets used are robust. For the in-person workshop, we will allow more time to work on presentations and permit presentation slides, giving participants more time to strengthen their communication skills by presenting to others from diverse backgrounds.

Outcomes

The outcome exceeded expectations: some projects earned presentation slots at the international IEEE Bioinformatics conference in 2022 [24,25,26,27,28,29,30,31,32], and the work yielded three machine learning models published in journals [33,34,35,36,37]. Participants’ feedback affirmed this experience’s success, underlining the potential for integrating convergence research into the curriculum through project-based learning. Moreover, the initiative yielded tangible results, with bioengineering graduate students trained in data science and engineering ultimately securing positions in the biotechnology sector.

Summary Process

Below are the steps and timeline to implement this framework in undergraduate and graduate bioengineering studies, from summer project planning through completion. Our implementation took 18 months (Fig. 6, Table 6).

Fig. 6

Gantt view of the DDMD advanced data science workshop framework

Table 6 Process steps

The success of our framework in fostering convergence science teams to address transdisciplinary challenges holds promise. Implementing this model in undergraduate and graduate studies could shape the next generation of researchers. By championing collaboration, problem-solving, and integration of convergent research methodologies, we pave the way to bridge knowledge gaps, prepare our students for convergence science, and propel scientific innovation.

Supplementary Materials

Online Resource 1 – Example Mini-Capstone Project and Code. This DOI contains a mini-capstone practicum titled “Prediction of essential genes using machine learning model.pdf,” a Jupyter notebook file titled “Essential Gene Prediction Process.ipynb,” and the toy dataset with files “afu_Archaeoglobus_fulgidus_DSM_4304.xlsx, apo_Archaeoglobus_profundus.xlsx, ast_Archaeoglobus_sulfaticallidus.xlsx, dde_Desulfovibrio_alaskensis.xlsx, dvu_Desulfovibrio_vulgaris_Hildenborough.xlsx, and subgrouplist.xlsx,” accessible via https://doi.org/10.6084/m9.figshare.24919599.

Online Resource 2 – Evaluation Survey Form and Raw Data. This DOI contains the post-workshop survey and the data collected; it contains the files “S2_a Sept_pre-assessment_data.xlsx, S2_b Oct_workshop_survey.docx, and S2_c Oct_in-person-workshop_data.xlsx,” accessible via https://doi.org/10.6084/m9.figshare.24993771.

Online Resource 3 – Word Cloud Dataset and Code. This DOI contains the files that allow readers to implement a Jupyter Notebook to create a word cloud based on survey responses. These files are “S3_a readme, S3_b_SurveyWordCloud.ipynb, S3_c1_preSurvey.txt, and S3_c2_postSurvey.txt,” accessible via https://doi.org/10.6084/m9.figshare.24919743.