Abstract
After completing an introductory biostatistics course, public health students often need to take one or more follow-on courses focusing on specialized areas of biostatistics. While there exist decades’ worth of pedagogical research on teaching introductory statistics to non-statistics majors, few systematic attempts have been made to explore innovative ways of teaching follow-on courses to non-statistics majors such as public health students. Extending previous research on teaching categorical data analysis to public health students, this paper explores ways to harness the power of computational thinking in teaching conceptual knowledge in a follow-on course on longitudinal data analysis. The proposed approach aims to keep students in their zone of proximal development by using computational experiments as a tool for developing understanding of conceptual knowledge. Learning activities center on experiments that explore the likelihood function. Illustrative examples of actual student work are used to demonstrate a practical way of integrating computational thinking into biostatistics course content.
1 Introduction
Biostatistics education at public health schools is often dogged by a pressing question: how to balance conceptual understanding and procedural skills. The distinction between conceptual understanding and mere knowledge of procedures has been thrown into sharp relief time and again in the realm of introductory statistics courses designed for non-statistics majors [3, 25, 28, 29, 34]. As a result, the traditional approach of bygone days, which focused excessively on skill in the use of cut-and-dried statistical procedures, has largely given way to a variety of new approaches that emphasize deep understanding of statistical concepts [1, 25, 34]. In the past few years, the pedagogical reform movement in the teaching of introductory statistics has accelerated considerably [4, 14, 31, 38], partly due to the recent convergence of statistics and data science [20].
In contrast, few comparable systematic attempts have been made to explore effective ways of teaching the same audience follow-on biostatistics courses that focus on a particular branch of statistics, e.g., categorical data analysis, although recent years have seen encouraging attempts to find innovative ways to teach non-statistics majors advanced topics normally covered by follow-on statistics courses. Among such advanced topics are mixed-effects models [12], principal component analysis [11], and cluster analysis [5]. Public health students enroll in follow-on biostatistics courses either to fulfill their degree requirements or solely to increase their research competency by taking follow-on courses as electives. There is an emerging consensus that training for public health researchers and practitioners should include statistical methods beyond those covered by today’s introductory biostatistics courses [21]. The relative lack of scholarly research on pedagogical methods for follow-on statistics courses poses urgent challenges to public health schools.
The question of how to effectively teach conceptual knowledge in follow-on biostatistics courses is inextricably tied to that of whether statistics should be taught as a branch of mathematics. Whereas the answer might be debatable in the broad context of statistics education [8], in the present context the answer is that statistics should be taught as statistics to public health students. One important determining factor is the wide variety of backgrounds among public health students. Simpson [36] was among the earliest statistics educators to recognize this wide variety of undergraduate backgrounds as a pedagogically important characteristic of public health graduate students. She delineated the spectrum of these students’ mathematical ability as ranging from “hated maths at school and avoided it ever since” to “enjoy maths.” The frustrating picture painted by Simpson nearly 30 years ago remains mostly unaltered at today’s public health schools. As a result, a major portion of graduate degree-seeking students at today’s public health schools lack calculus-based mathematical training, and they are unaccustomed to mathematical reasoning. In a recent example of teaching statistics as statistics to public health students, the instructor wove high school algebra and concept-driven computing exercises together to help students digest conceptual knowledge [48]. The goal of such a course is to use mathematics as an effective tool to improve students’ conceptual understanding of statistics, not to elevate students to a higher level of mathematical sophistication.
In principle, the central aim of a follow-on biostatistics course is the same as that of an introductory course. As reiterated recently by Conway IV et al. [9], the instructor should aim to help students “build procedural fluency from conceptual understanding.” In practice, however, the instructor inevitably faces challenges posed by the often far more complex subject matter. The recent work of Cai & Wang [5] illustrates the nature of such challenges. These researchers successfully avoided formal mathematical formulation in teaching the basic principle undergirding a clustering algorithm: they used a square bulletin board and pushpins of assorted colors to allow their students to grasp the essence of the algorithm. However, such ingenious pedagogical methods often result from a combination of diligent research and serendipity, and hence cannot be easily generalized to other topics. In contrast, the computational approach reported in a recent study [48] can be more readily generalized into a dominant pedagogical tool for developing follow-on biostatistics courses.
In this paper I show the feasibility of adapting this approach to teach longitudinal data analysis to public health students. Longitudinal studies play a prominent role in epidemiology [6], but longitudinal data analysis is among the most challenging subjects for public health students. To make the original approach more suitable for teaching longitudinal data analysis to public health students, I weave computational thinking (CT) into a battery of written or digital artifacts (as Wass et al. [43, p. 321] would call them) to aid students in learning longitudinal data analysis. CT has been identified as a fundamental skill comparable to reading, writing, and arithmetic in a child’s education [44], and is now regarded as a foundational competency for being an informed citizen of the 21st century [17]. Moreover, CT is believed by some to have the potential to “help create a mindset that empowers students to simultaneously think both statistically and computationally” [20]. The resulting method reported here can be viewed as an example of “CT-in-biostatistics-learning,” a special case of what Grover & Pea [17] called “CT-in-STEM-learning.”
2 Background characteristics of students
The longitudinal data analysis course at my institution was tailored to its MPH (master of public health) and DrPH (doctor of public health sciences) students. The MPH program offers six concentrations, including environmental health, epidemiology and biostatistics, and the DrPH program offers three concentrations. The course’s goal is to train these students as informed consumers of biostatistics, in contrast to training students as innovators of statistics. Thus, a motto of the course is: “promote procedural fluency buttressed by a sound conceptual understanding.” To achieve that goal, the instructor must cope effectively with diverse student backgrounds by decoupling conceptual knowledge from higher mathematics. This has been a dominant theme in research on the teaching of introductory statistics courses [39], but the problem is accentuated in a follow-on course by the considerably more challenging subject matter, as exemplified by Cai & Wang [5] and by Zheng [48]. The following is a representative cross section of students who participated in my past longitudinal data analysis classes, presented here to help the reader appreciate the rationale behind the proposed pedagogy.
On the surface, Student I was far from ready to take a concept-centric biostatistics course. As an undergraduate, she majored in nursing. She enrolled in my longitudinal data analysis class as a second-year MPH student in epidemiology. She had not taken any mathematics courses beyond high school, but she took an elective SAS programming course in the preceding semester. SAS [32], widely used in public health research, is a comprehensive statistical software suite, and students taking this elective programming course learn basic programming by writing code in SAS’s unique scripting language. Student I may seem to fall at the lower extreme of the spectrum, but participants with similar backgrounds were not uncommon. For example, Student F had an undergraduate degree in molecular biology. She took an algebra course and a trigonometry course as an undergraduate. Before enrolling in my longitudinal data analysis class, she took a data management course and a categorical data analysis course. The data management course, offered by my school, was her first introduction to computing; there she learned basic concepts about databases and code writing. She improved her SAS coding skills while taking my categorical data analysis course, as that course weaves SAS coding into the learning of conceptual statistical knowledge [48].
At the other end of the spectrum were students who had partially completed an introductory calculus sequence, and who also had some coding experience before coming to my school. For example, student S majored in biology as an undergraduate. She took AP Calculus I in high school, and then took Calculus II for the life sciences in college. In her undergraduate days, she learned the R language via a statistics course, and acquired a rudimentary knowledge of the Python language via an introductory computer science course.
A sizeable portion of students had work experience before enrolling in my school, and some continued to work while pursuing a public health degree through my school’s online degree program [47]. One such example is Student C, who worked as a staff member at a government health agency while taking my longitudinal data analysis course. She majored in public health as an undergraduate. Her quantitative skills came from taking a statistics course and an epidemiology course as part of her public health curriculum, and from taking a chemistry course that involved mathematics. In addition, through her work she learned the basics of SQL, a popular database language, and learned to use another major statistical software package.
3 Exploring constructivism and the zone of proximal development
Education research in the last two decades or so has repeatedly pointed to an idea often couched in the language of constructivism [13, 15, 26]. Two aspects of constructivist theory are particularly relevant to the present study. First, conceptual knowledge cannot simply be dispensed as if it were as obvious a fact as the triangle inequality in elementary geometry, because students must engage in a meaning-making process to assimilate the knowledge. Second, the meaning-making process, also known as knowledge elaboration, must be carefully geared to a student’s prior knowledge. As von Glasersfeld [16] put it, “concepts cannot be simply transferred from teachers to students—they have to be conceived.” While this claim may appear new and refreshing to some, observations of this kind can be traced to much earlier investigators. For instance, in the 1930s, the pioneering psychologist Vygotsky [41, p. 170] made the following claim in the context of child psychology.
... pedagogical experience demonstrates that direct instruction in concepts is impossible. It is pedagogically fruitless. The teacher who attempts to use this approach achieves nothing but a mindless learning of words, an empty verbalism that simulates or imitates the presence of concepts in the child. Under these conditions, the child learns not the concept but the word, and this word is taken over by the child through memory rather than thought. Such knowledge turns out to be inadequate in any meaningful application. This mode of instruction is the basic defect of the purely scholastic verbal modes of teaching which have been universally condemned. It substitutes the learning of dead and empty verbal schemes for the mastery of living knowledge.
The above quote is not meant to draw a parallel between public health students learning biostatistical concepts and school children learning words that designate abstract concepts. However, it is likely to resonate with many biostatistics instructors at public health schools who have attempted to explain abstract concepts such as the maximum likelihood estimate (MLE) of an odds ratio.
One solution to this pedagogical problem lies in the theory of constructivism. Schmidt [33] explained the relevance of constructivism from an information processing perspective. The information processing theory identifies three principles that are essential for successful learning of new knowledge: activation of relevant prior knowledge, provision of a context resembling situations in which the new knowledge will be applied (which is dubbed encoding specificity), and stimulation of knowledge elaboration. The concept of knowledge elaboration in statistics is almost as old as statistics itself. In any PhD statistics curriculum, proofs of theorems and derivations of formulas are a recurring theme. The reasons for teaching proofs and derivations are not merely to develop students’ theoretical research ability; the proofs and derivations offer students ample opportunities to elaborate on the conceptual knowledge underpinning the theorems and formulas.
Considering the disparate backgrounds of public health students sampled in the preceding section, a proof-based approach is out of reach for most of them, and an alternative way to foster knowledge elaboration is desirable. For example, Shillam et al. [34] relied on real-world data and technology to help pharmacy students develop conceptual understanding in an online introductory biostatistics course. Using real-world data is conducive to honing students’ ability to apply as well as to understand biostatistics. However, from the perspective of the information processing approach, the method of Shillam et al. aims more at encoding specificity than at knowledge elaboration. Learning to apply ready-made statistical procedures to real-world data is not always an effective way to stimulate knowledge elaboration. For instance, the statistical concept of deviance would be hard for public health students to grasp if the instructor merely showed them real-world data examples. As demonstrated recently [48], a hands-on computational approach gave students a genuine opportunity to elaborate on that concept.
This paper offers a similar computational route for teaching longitudinal data analysis to public health students. The main impetus for extending this novel approach to a new biostatistics course is that the computational approach provides a level playing field for all students, whatever their mathematical readiness. The reasons for this advantage come to the fore when one views the students’ backgrounds from the perspective of the zone of proximal development (ZPD). The concept of the ZPD was proposed by Vygotsky [40, p. 84] to underscore the distinction between a child’s actual development level and her potential development level. In the context of child psychology, a child’s actual development level indicates the difficulty level of tasks that she can accomplish independently, while her potential development level indicates the difficulty level of tasks that she can accomplish with a tutor’s assistance. Therefore, a child’s potential development level refers to her next level of performance achievable with a tutor’s assistance, which may not be related to her long-term or lifetime potential. The concept of the ZPD has long been used fruitfully in primary and secondary education, but its application in higher education is a relatively new phenomenon. One reason for this unfortunate delay was given by Wass & Golding [42]: “Teachers in higher education often do not have formal training as teachers and, therefore, have rarely been exposed to the ZPD as a theory.” Wass & Golding [42] used the ZPD successfully to facilitate the teaching of critical thinking in zoology. Murphy [30, Chap. 9] expounded on how to explore the enormous potential of the ZPD in higher education.
As can be seen from the cross section of student backgrounds given in the preceding section, using a theorem-proof approach to elaborate on conceptual knowledge is impractical for most students, because this sort of skill is not close enough to their attained mathematical levels; that is, it falls outside their ZPD. In contrast, the computational approach exemplified in recent works by Zheng [47, 48, 49] is appreciably closer to most students’ ZPD. There are several contributing factors. First, public health schools have placed increasing emphasis on computing literacy education in recent years. As a result, students with scant computing experience acquire basic computing skills by the end of their first academic year. My school’s data management course and SAS programming course allow students such as Student I to transition to the higher level of computer coding that my longitudinal data analysis course requires. Second, society’s increasing reliance on data science propels students (either before or after enrolling in public health schools) to seek opportunities to enhance their computing skills, whether to improve job performance (e.g., Student C’s effort to learn SQL coding) or for self-improvement (e.g., Student I’s effort to acquire a SAS certificate). These and other factors conspire to equalize the effect of prior computing knowledge on learning conceptual statistical knowledge, but there are no comparable mechanisms equalizing the effect of students’ prior mathematical knowledge.
Another factor that makes the proposed approach feasible is that it leverages imitative activity to advantage. The integral role of imitation in learning within the ZPD has long been known; as Chaiklin [7] put it, the concept of the ZPD was constructed around Vygotsky’s technical concept of imitation. To help flatten the learning curve, instructors should view computing and coding as a means to catalyze students’ knowledge elaboration process; acquisition of sophisticated coding skills is not the primary focus. Most of the computing exercises are imitative in nature as far as coding ability is concerned, which allows students to focus on re-creating the meaning of the statistical concepts embedded in the exercises. As will be made clear by examples in the ensuing section, these imitative problems can hardly be solved by mindlessly copying tutorial examples, as students must develop a degree of understanding in the process of solving them.
With the ZPD thus identified, focus should now shift to the construction of computing tasks that serve as scaffolding to help students assimilate conceptual knowledge that would otherwise be beyond their reach. Scaffolding is a helpful metaphor proposed by Wood et al. [45] and is now widely used in ZPD theory. Constructing scaffolding for a specific content area is a daunting challenge for instructors, as they are the ultimate implementers and testers of any learning theory.
4 A computational thinking approach in detail
As a detailed case study, this section offers concrete examples from several categories of problems whose conceptual understanding would lie outside a typical student’s ZPD without well-designed computational exercises acting as stepping stones. The learning process is further eased by concentrating attention on cases in which the outcome variables are continuous. Only after students have developed a decent conceptual understanding of the key ideas encountered in the continuous-outcome cases does the focus shift to the more popular cases of categorical outcome variables. As a consequence, the multivariate normal distribution is the first hurdle for students to overcome. The multivariate normal distribution serves as an efficient vehicle for imparting the fundamental concept of the likelihood principle, which is a gateway to enabling students’ active participation in knowledge elaboration.
4.1 The normal distribution is the pedagogical backbone
The multivariate normal density function should be introduced in a heuristic manner. The univariate normal density, along with its bell-shaped density curve, is used as a basis for drawing analogies. Facts about vectors and matrices are discussed only on a need-to-know basis. For example, determinants and inverses are defined only intuitively, through concrete numerical examples, as students only need to appreciate the fact that the multivariate normal density function transforms an arbitrary point in two- or higher-dimensional space into a positive number, just as the univariate normal density function does in one-dimensional space. To help students assimilate this basic idea, I asked students to compare the analytic expression
\[
f(\mathbf{y}) = (2\pi)^{-p/2}\,|\Sigma|^{-1/2}\exp\!\left\{-\tfrac{1}{2}(\mathbf{y}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{y}-\boldsymbol{\mu})\right\}
\]
with the following SAS implementation:
Students then use this implementation to get a feel for the density function through numerical exercises. For example, they are given a point that leads to a density value larger than unity and are asked to think about why this does not violate the requirement that the total probability be unity. They are also given two points, one closer to the mean vector than the other, and are asked to verify numerically that the point “nearer” to the mean vector gives a larger density value.
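For readers who wish to experiment outside SAS, the same numerical explorations can be sketched in Python. The bivariate case below writes out the 2×2 determinant and inverse explicitly, mirroring the need-to-know treatment described above; the particular points and covariance values are illustrative choices, not taken from the course materials.

```python
import math

def bvn_density(y, mu, sigma):
    """Bivariate normal density, with the 2x2 determinant and inverse
    written out explicitly (no linear-algebra library needed)."""
    a, b = sigma[0][0], sigma[0][1]
    c, d = sigma[1][0], sigma[1][1]
    det = a * d - b * c
    # Inverse of a 2x2 matrix: swap diagonal, negate off-diagonal, divide by det
    inv = [[d / det, -b / det], [-c / det, a / det]]
    e0, e1 = y[0] - mu[0], y[1] - mu[1]
    quad = (e0 * (inv[0][0] * e0 + inv[0][1] * e1)
            + e1 * (inv[1][0] * e0 + inv[1][1] * e1))
    return math.exp(-0.5 * quad) / (2 * math.pi * math.sqrt(det))

mu = [0.0, 0.0]
# With a small covariance the peak height exceeds 1, even though the
# density still integrates to 1 over the plane
peak = bvn_density([0.0, 0.0], mu, [[0.01, 0.0], [0.0, 0.01]])
# A point nearer the mean vector gives a larger density value
sigma = [[1.0, 0.3], [0.3, 1.0]]
near = bvn_density([0.5, 0.5], mu, sigma)
far = bvn_density([2.0, 2.0], mu, sigma)
```

Both classroom observations fall out immediately: `peak` is far larger than unity, and `near` exceeds `far`.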
4.2 The likelihood function is key to knowledge elaboration
As shown previously [47, 48], the likelihood function enables students to concretize a number of important abstract concepts, allowing them to develop a deep understanding of rather mathematical ideas via hands-on, intuitive computational exercises. With a newly acquired working knowledge of the multivariate normal density function, students are now in a position to elaborate on the likelihood principle in a longitudinal data setting. As a warm-up exercise for mainstream models, students are asked to fit a normal model to the following real-world data.
Rat   Week 1   Week 2   Week 3   Week 4   Week 5
 1      61       86      109      120      129
 2      59       80      101      111      122
 3      53       79      100      106      133
 4      59       88      100      111      122
 5      51       75      101      123      140
 6      51       75       92      100      119
 7      56       78       95      103      108
 8      58       69       93      116      140
 9      46       61       78       90      107
10      53       72       89      104      122
These data are from a study reported by Box [2]; the weekly weights were recovered by adding the weekly weight increases, as was done by Lindsey [24, p. 150]. Students are required to adopt the following parameterized mean vector:
and are also asked to assume a covariance matrix of the form \(\sigma ^2 \times A\) with A being a \(5\times 5\) AR(1) matrix.
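The AR(1) structure can be made concrete with a short computational sketch: entry \((i,j)\) of A equals \(\rho^{|i-j|}\). The Python snippet below (an illustration only; the course exercises themselves use SAS) builds the \(5\times 5\) matrix for a hypothetical \(\rho = 0.5\).

```python
import numpy as np

def ar1_matrix(dim, rho):
    """AR(1) correlation matrix: entry (i, j) equals rho ** |i - j|."""
    idx = np.arange(dim)
    return rho ** np.abs(idx[:, None] - idx[None, :])

A = ar1_matrix(5, 0.5)   # the 5 x 5 matrix A for a hypothetical rho = 0.5
```

Each row of `A` shows correlations decaying geometrically with the gap between weeks, which is the whole content of the AR(1) assumption.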
A conventional approach might direct students’ attention to two procedural skills: data reorganization (see, e.g., Hedeker & Gibbons [18, p. 32]) and code writing for model fitting. Students can master these skills relatively quickly. The conventional approach would then focus on output interpretation as the finale of a student’s learning process. Output interpretation touches the surface of conceptual knowledge, but it is not bona fide knowledge elaboration. To deepen students’ understanding of the computer output, I asked students to code the log-likelihood function and then use their code to verify the MLEs of the model parameters produced by a reputable statistical package such as SAS [32]. As Fig. 1 shows, in the process of accomplishing this task, a student learns the precise meaning of the parameter estimates, sees how the likelihood principle works in practice, and deepens her understanding of the multivariate normal density function. After successfully coding the likelihood function, students can further enhance their understanding by visualizing it. To focus students’ attention on the essence of the likelihood principle, I designed a problem that allowed students to see how the likelihood varies with a particular parameter while the other parameters are kept fixed at their respective MLEs. Figure 1C, by a student, shows that the log likelihood indeed reaches its maximum value at \(\mu =\hat{\mu }\). In this sequence of computational exercises, students learn several CT skills, such as problem decomposition and debugging, but they acquire these skills by solving interesting statistical problems and by studying worked in-class examples that focus on statistical concepts. This approach can help instructors avoid the pitfall of teaching CT in a manner disconnected from the disciplinary content that CT aims to serve [23].
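To convey the flavor of the profiling exercise without reproducing the students’ SAS code, here is a hedged Python sketch: it codes the multivariate normal log likelihood for the rat data under a \(\sigma^2 A\) covariance and profiles it in a single hypothetical mean-shift parameter while the remaining parameters stay fixed. The values \(\sigma^2 = 60\) and \(\rho = 0.5\) are arbitrary placeholders, not the actual MLEs.

```python
import numpy as np

# Weekly body weights of the 10 rats (rows) over weeks 1-5 (columns),
# transcribed from the table above
weights = np.array([
    [61, 86, 109, 120, 129], [59, 80, 101, 111, 122],
    [53, 79, 100, 106, 133], [59, 88, 100, 111, 122],
    [51, 75, 101, 123, 140], [51, 75,  92, 100, 119],
    [56, 78,  95, 103, 108], [58, 69,  93, 116, 140],
    [46, 61,  78,  90, 107], [53, 72,  89, 104, 122],
], dtype=float)

def ar1_matrix(dim, rho):
    """AR(1) correlation matrix: entry (i, j) equals rho ** |i - j|."""
    idx = np.arange(dim)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def loglik(mu_vec, sigma2, rho, data):
    """Multivariate normal log likelihood, summed over independent rats."""
    n, p = data.shape
    cov = sigma2 * ar1_matrix(p, rho)
    _, logdet = np.linalg.slogdet(cov)
    inv = np.linalg.inv(cov)
    resid = data - mu_vec
    quad = np.einsum('ij,jk,ik->', resid, inv, resid)
    return -0.5 * (n * p * np.log(2 * np.pi) + n * logdet + quad)

# Profile the log likelihood in a single hypothetical mean-shift
# parameter s, holding sigma2 and rho fixed at placeholder values
base_mu = weights.mean(axis=0)      # weekly sample means
shifts = np.linspace(-5, 5, 101)
profile = [loglik(base_mu + s, 60.0, 0.5, weights) for s in shifts]
best = shifts[int(np.argmax(profile))]
```

Because `base_mu` is the vector of weekly sample means, the profile peaks at a shift of zero, the same qualitative picture a student sees when profiling one parameter around its MLE.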
4.3 Exploring incomplete data
The above computational exercise capitalizes on students’ innate curiosity and natural inclination toward hands-on activities. The satisfaction derived from verifying the likelihood function through concept-driven computations can sustain students’ interest in learning conceptual knowledge through the semester and beyond. The following exercise piques students’ interest in exploring a new idea: the accommodation of incomplete data.
Textbooks often hail the capability to accommodate incomplete data as a distinctly attractive feature of modern longitudinal data methods. Initially, students may be impressed, but excitement soon gives way to curiosity. This provides an opportunity for students to further elaborate on several key ideas. Hence, students are asked to revisit the foregoing computational exercise under the assumption that the first rat lacks the second and fourth measurements and the second rat lacks the third measurement.
After studying an in-class example, students can see that the covariance matrix for the first rat should be proportional to the matrix
\[
\begin{pmatrix} 1 & \rho^2 & \rho^4 \\ \rho^2 & 1 & \rho^2 \\ \rho^4 & \rho^2 & 1 \end{pmatrix}
\]
Similarly, they can handily write down the covariance matrix for the second rat. Students can then modify their code from the previous exercise to confirm the maximized log likelihood value. Like the foregoing exercise, this problem aims to induce students to construct knowledge for themselves, a practice falling into the domain of experiential learning. Students are unknowingly led to use several CT techniques, such as abstraction, decomposition, and generalization, to compute the likelihood function. For example, coding the two covariance matrices (see a student’s work in Fig. 2A) and modifying the loop inside the function mylik (see a student’s work in Fig. 2B) hone students’ ability to identify patterns or similarities and to adapt an existing algorithm to solve similar problems. In this learning process, students do not try to memorize any important facts related to the accommodation of incomplete data, but they are likely to end up internalizing the knowledge without realizing it.
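The covariance-matrix bookkeeping for this incomplete-data exercise amounts to deleting the rows and columns of A that correspond to the missing weeks. A minimal Python sketch of the idea (again purely illustrative; students carry this out in SAS, and \(\rho = 0.5\) is a placeholder):

```python
import numpy as np

def ar1_matrix(dim, rho):
    """AR(1) correlation matrix: entry (i, j) equals rho ** |i - j|."""
    idx = np.arange(dim)
    return rho ** np.abs(idx[:, None] - idx[None, :])

rho = 0.5                      # placeholder value for illustration
A = ar1_matrix(5, rho)

# Rat 1 lacks the 2nd and 4th measurements: keep rows/columns 1, 3, 5
keep1 = [0, 2, 4]
A1 = A[np.ix_(keep1, keep1)]

# Rat 2 lacks the 3rd measurement: keep rows/columns 1, 2, 4, 5
keep2 = [0, 1, 3, 4]
A2 = A[np.ix_(keep2, keep2)]
```

Printing `A1` shows exactly the pattern of powers of \(\rho\) displayed above, which is the insight the exercise is after: the “new” covariance matrices are nothing but submatrices of the complete-data one.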
4.4 Nurturing model building ability
The importance of teaching modeling to non-statistics majors is increasingly appreciated by statistics educators [12, 38]. A follow-on course on longitudinal data analysis should foster in students a sense of model building as part of routine biostatistical practice, because ready-made models are not as common in longitudinal data analysis as they are in an introductory course. Models based on the multivariate normal distribution are an ideal starting point. The following stripped-down linear growth model was designed to help students see what a growth model can do in practice.
Students were asked to fit this model to the rat body weight data of the first two groups of rats from the same study mentioned earlier [2]. Note that the first 17 observations constitute the control group.
With their increasing understanding of the multivariate normal distribution, students can readily see that equation (2) amounts to an economical way of specifying the mean vector of a normal distribution (in contrast to the relatively more wasteful way of specifying it in model (1)). The need for creating a time variable and a treatment dummy was easily understood in the context of this simple model. The subsequent SAS syntax for model fitting can also be mastered with little effort. However, albeit more time-consuming, coding the likelihood function of the model helps students internalize the likelihood principle and better appreciate results from model-fitting routines. Figure 3 shows the learning-process workflow from data organization to model fitting to coding the likelihood function and verifying the quantity \(-2\log L\) (SAS notation for the negative of twice the maximized log likelihood).
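The economy of the growth-model parameterization can be illustrated with a small sketch. The exact form of equation (2) is not reproduced here; the snippet below uses a generic linear growth mean, an intercept plus a week slope plus a treatment effect, with hypothetical coefficient values, to show how three parameters generate a full five-week mean vector for either group.

```python
import numpy as np

# A generic linear growth sketch: the mean response at each week is
# beta0 + beta1 * week + beta2 * treat, so three coefficients generate
# the full five-week mean vector for either group.  The coefficient
# values below are hypothetical, not estimates from the rat data.
weeks = np.arange(1, 6)

def mean_vector(beta0, beta1, beta2, treat):
    X = np.column_stack([np.ones(5), weeks, np.full(5, treat)])
    return X @ np.array([beta0, beta1, beta2])

control = mean_vector(45.0, 17.0, -5.0, treat=0)
treated = mean_vector(45.0, 17.0, -5.0, treat=1)
```

Contrast this with the unstructured alternative, which would demand a separate mean parameter for every week-by-group combination; the design-matrix form is what makes the specification economical.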
4.5 Simulation facilitates elaboration on a random intercept logit model
As shown repeatedly by statistics educators [19, 27, 35], Monte Carlo simulation plays a unique role in statistics education. In the following example, simulation catalyzes students’ elaboration on an otherwise elusive concept related to a random intercept logit model.
The random intercept logit model may appear unfathomable to students, partly because its likelihood function is not representable in terms of elementary functions. Reliance on a calculus-based method at the level of Hedeker & Gibbons [18] is not conducive to generating understanding for most public health students, although the standard integral symbol can still be retained as notational shorthand for a weighted averaging operation. Simulation, a major component of CT, is a convenient vehicle for knowledge elaboration in the present context.
To help students use simulation as a tool for knowledge elaboration, in a video lecture I discuss the mental health study example in Hedeker & Gibbons [18, p.175] from a slightly different perspective, putting emphasis on the likelihood function. My discussion begins with the definition of the linear component
where \(v_{0i}\) are drawn from \(\text{ Normal }(0,\sigma _v^2)\). Because the observation on the first subject is (1, 0, 0, 1), the contribution to the likelihood function by the first subject is
with \(\phi (\cdot )\) denoting a normal density function with mean zero and variance \(\sigma _v^2\). Students are asked to interpret the integral sign \(\int\) merely as taking a weighted average according to a normal curve.
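The weighted-average reading of the integral translates directly into a simulation: draw many values of the random intercept from the normal curve, evaluate the conditional likelihood of the response pattern (1, 0, 0, 1) at each draw, and average. The Python sketch below illustrates this; the fixed linear components and variance are hypothetical placeholders, not the mental health study’s fitted values.

```python
import numpy as np

rng = np.random.default_rng(2024)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subject_likelihood(y, eta_fixed, sigma_v, n_draws=200_000):
    """Monte Carlo version of the integral: average the conditional
    likelihood of the response pattern over random-intercept draws
    v ~ Normal(0, sigma_v ** 2)."""
    v = rng.normal(0.0, sigma_v, size=n_draws)
    # conditional success probability at each time point, for each draw
    p = sigmoid(eta_fixed[None, :] + v[:, None])
    cond = np.prod(np.where(np.array(y) == 1, p, 1.0 - p), axis=1)
    return cond.mean()

# Hypothetical fixed linear components at the four time points
eta = np.array([0.5, 0.0, -0.5, -1.0])
y = (1, 0, 0, 1)                  # response pattern of the first subject
L1 = subject_likelihood(y, eta, sigma_v=1.0)
```

A useful sanity check, and one students can perform themselves: shrinking \(\sigma_v\) toward zero collapses the average to the ordinary product of Bernoulli probabilities, making the role of the integral transparent.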
After studying this worked example, students explore a subset of the real-world data generated by the study of Sommer et al. [37]. This data set consisted of 1200 observations on 275 preschool Indonesian children. The outcome variable was the presence or absence of respiratory infection. Students are asked to explore the logit model
where \(v_{i}\) is a subject-specific random intercept. Definitions of the predictors in the above equation are given in Zeger & Karim [46, p. 83]. For example, Xer is a dummy variable indicating whether a child had xerophthalmia, and Stunted is a dummy variable indicating whether the child’s height was below 85% of the expected height for the child’s age. As in the previous examples, model fitting and output interpretation are relatively easy, and hence no student work is shown here. The elaboration part is more engaging and more time-consuming, and debugging may consume a considerable amount of time for some students. A student’s work in verifying the likelihood function using simulation is shown in Fig. 4.
Another challenge is how to teach the concept of marginalization. The analytic approximation method given by Hedeker & Gibbons [18, p. 179] is somewhat opaque to most public health students. However, the essence of marginalization is simply averaging a quantity over the whole population, and students can better appreciate this point through simulation. Hence, students are asked to use simulation to compute the following probabilities and compare the results with those obtained by the analytic approach: let gender = 1, stunted = 1, sin = 1, cos = 0, xer = 1, and height4age = 0, and find the probabilities of infection for age \(= -30, -20, \ldots , 20, 30\). (Age was centered at 36 months.) Figure 5 shows a student’s work. Her results were quite similar to those she obtained by the analytic methods (not shown here).
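A sketch of the simulation-based marginalization: for each age on the grid, average the subject-specific infection probability over draws of the random intercept. The fixed-effect values below are hypothetical placeholders, not the coefficients fitted to the Sommer et al. data.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def marginal_prob(eta_fixed, sigma_v, n_draws=500_000):
    """Marginal probability: average the subject-specific probability
    over draws of the random intercept v ~ Normal(0, sigma_v ** 2)."""
    v = rng.normal(0.0, sigma_v, size=n_draws)
    return float(sigmoid(eta_fixed + v).mean())

# Hypothetical fixed-effect linear components over the centered-age grid;
# the intercept and slope are placeholders, not the study's fitted values
ages = np.arange(-30, 31, 10)          # -30, -20, ..., 20, 30
etas = -2.0 - 0.03 * ages
probs = np.array([marginal_prob(e, sigma_v=1.0) for e in etas])
```

A teaching point the sketch makes visible: for a negative linear component, the marginal probability sits above the subject-specific probability evaluated at the average intercept, the familiar attenuation that distinguishes marginal from conditional effects.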
5 Evidence of feasibility
One limitation of this study is the small number of students involved, as enrollment in a follow-on biostatistics course tends to be considerably lower than in an introductory course. As a result, it has not yet been possible to conduct a rigorous assessment of the new approach. The reader should therefore view the observations recounted here as the author’s personal experience; the assertions are not necessarily supported by the kind of evidence that only a large-scale experiment can provide. Still, evidence of the feasibility of the new approach has emerged from two course evaluation surveys, both administered by the Texas A&M University Office of Institutional Effectiveness & Evaluation. Note that conventional intervention assessment tools could not be used directly, because the new approach shifted attention from the learning of declarative or procedural knowledge to the elaboration of conceptual knowledge.
The 2020 survey had six respondents. Two items are particularly relevant. The first is the statement “On the whole, this was a good course.” Students had five options: SA (strongly agree), A (agree), U (undecided), D (disagree), and SD (strongly disagree). Three students chose “SA,” two chose “A,” but one chose “D.” The second is the statement “On the whole, the information learned in the course was valuable to me.” Four students chose “SA,” but two chose “U.” These results suggest that most students were receptive to the new teaching method, but that some students needed more individualized help.
The 2021 survey adopted a new format. Three items threw light on the feasibility of the new approach. Item A was stated as follows: “This course helped me learn concepts or skills as stated in course objectives/outcome.” Students were asked to choose an integer between 1 and 4 according to the following criteria.

1. This course did not help me learn the concepts or skills.
2. This course only slightly helped me learn the concepts or skills.
3. This course moderately helped me learn the concepts or skills.
4. This course definitely helped me learn the concepts or skills.
Six students out of a class of 8 responded, with five students choosing a “4” and one choosing a “3.” The survey reported an average score of 3.83, indicative of students’ favorable perception of the course’s effectiveness. If students had been indifferent about the course’s effectiveness, they would have responded randomly. Under this randomness assumption, a simple Monte Carlo test shows that \(\text{Prob}(\text{average score} \ge 3.8333) \approx 0.0017\). That is, a statistical test of the course’s effectiveness yields an approximate p value of 0.002.
Item B was phrased as follows: “In this course, I engaged in critical thinking and/or problem solving.” Here students had four options: 1: Never; 2: Seldom; 3: Often; 4: Frequently. Of the same six respondents, four gave a “4” and two gave a “3,” for an average score of 3.667. A similar Monte Carlo exercise gives an approximate p value of 0.007. This is an encouraging indication of students’ active engagement in knowledge elaboration despite their disparate backgrounds.
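The two Monte Carlo tests above are easy to reproduce. Under the randomness assumption, each of the six respondents picks an integer from 1 to 4 uniformly at random; the sketch below simulates a million such classes and estimates the probability of an average at least as large as the observed one. Working with totals avoids floating-point comparisons: an average of at least 3.8333 is the same event as a total of at least 23, and an average of at least 3.6667 is the same event as a total of at least 22.

```python
import numpy as np

rng = np.random.default_rng(2021)
n_resp, n_sim = 6, 1_000_000

# Null model: each respondent picks 1, 2, 3 or 4 uniformly at random
totals = rng.integers(1, 5, size=(n_sim, n_resp)).sum(axis=1)

# Average >= 23/6 = 3.8333 is the event total >= 23 (Item A);
# average >= 22/6 = 3.6667 is the event total >= 22 (Item B)
p_item_a = np.mean(totals >= 23)
p_item_b = np.mean(totals >= 22)

print(f"Item A: p ~ {p_item_a:.4f}   Item B: p ~ {p_item_b:.4f}")
```

The exact values are \(7/4096 \approx 0.0017\) and \(28/4096 \approx 0.0068\), matching the approximate p values reported in the text.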
Item C is perhaps of the greatest interest to those who may consider adopting the new approach in their own teaching. Item C went as follows: “The instructor’s teaching methods contributed to my learning.” Students could choose an integer between 1 and 3 to indicate one of the following three opinions. 1: Did not contribute; 2: Contributed a little; and 3: Contributed a lot. Five students chose a “3,” but one chose a “1.” Clearly, five students welcomed the new teaching method, but it failed to bring one student’s learning into the ZPD. Future research may explore ways to provide more individualized scaffolding to bring more students into the ZPD.
6 Discussion
Longitudinal data analysis is an important follow-on biostatistics course for public health students. This paper has shown that by shifting to a unique computational approach, the instructor can bring conceptual knowledge learning into most students’ ZPD. Once inside their ZPD, students learn conceptual knowledge through knowledge elaboration that is structured and aided by carefully devised computational exercises. This paper highlights the important role of CT in those hands-on computational exercises.
The potential of CT in education is widely recognized. However, as Czerkawski & Lyman [10] noted, incorporating CT into higher education faces higher hurdles than in K-12 education, because at the college level the use of CT in teaching and learning depends heavily on the subject matter. Integrating CT into curriculum content has been identified as an effective, synergistic approach to deepening students’ understanding of that content as well as sharpening their CT skills [22]. The present work, along with previous work [47,48,49], exemplifies this idea in the context of biostatistics education for public health students. Injecting CT into a biostatistics course is less an objective in itself than a means of catalyzing knowledge elaboration. The present work also exemplifies an oft-overlooked distinction, which some believe [10] is important to the promotion of computational thinking in higher education: applying CT skills is not the same as applying computers to data crunching. As the foregoing examples show, solving a problem (data crunching) can require far less computer coding than elaborating on the underlying key concepts.
As shown previously in a course on categorical data analysis [47, 48], the likelihood function can serve as a lever for facilitating knowledge elaboration. Even in the case of generalized estimating equations (GEE), which do not rely on the likelihood, students can develop a deeper appreciation of the strengths and shortcomings of the GEE approach when they have an intuitive grasp of the likelihood principle. From a statistical perspective, a hands-on emphasis on the likelihood function enables students to develop a deep understanding of the all-important likelihood principle, which cannot be achieved merely by examining a model’s defining equation and studying its verbal explanation in a textbook. From a CT perspective, reconstructing the likelihood function for a concrete problem via computer coding nurtures students’ inclination to think by way of models. The coding process itself provides students with ample opportunity to learn basic CT skills such as problem decomposition, simulation, and debugging. CT skills are intended solely as a vehicle for fostering knowledge elaboration, and students’ acquisition of CT skills is a by-product of learning biostatistics. The pedagogic approach discussed here and elsewhere [48, 49] suggests a practical way of integrating CT training into public health biostatistics curricula.
Data availability
The only data not included in the text is the data set used in the last example. This well-known data set is available upon request.
References
Bland J. Teaching statistics to medical students using problem-based learning: the Australian experience. BMC Med Educ. 2004;4:31.
Box G. Problems in the analysis of growth and wear curves. Biometrics. 1950;6(4):362–89.
Bradstreet T. Teaching introductory statistics courses so that nonstatisticians experience statistical thinking. Am Stat. 1996;50:69–78.
Brisbin A, do Nascimento EM. Reading versus doing: methods for teaching problem-solving in introductory statistics. J Stat Educ. 2019;27(3):154–70.
Cai X, Wang Q. Educational tool and active-learning class activity for teaching agglomerative hierarchical clustering. J Stat Data Sci Educ. 2020;28(3):280–8.
Caruana EJ, Roman M, Hernández-Sánchez J, Solli P. Longitudinal studies. J Thorac Dis. 2015;7(11):E537–40.
Chaiklin S. The zone of proximal development in Vygotsky’s analysis of learning and instruction. In: Kozulin A, Gindis B, Ageyev VS, Miller SM, editors. Vygotsky’s educational theory in cultural context. Cambridge University Press; 2003. p. 39–64.
Cobb GW, Moore DS. Mathematics, statistics, and teaching. Am Math Mon. 1997;104:801–23.
Conway B IV, Martin WG, Strutchens M, Kraska M, Huang H. The statistical reasoning learning environment: a comparison of students’ statistical reasoning ability. J Stat Data Sci Educ. 2019;27(3):171–87.
Czerkawski BC, Lyman EW. Exploring issues about computational thinking in higher education. TechTrends. 2015;59(2):57–65.
Di Iorio J, Vantini S. How to get away with statistics: gamification of multivariate statistics. J Stat Data Sci Educ. 2021;29(3):241–50.
Evans C. Regression, transformations, and mixedeffects with marine bryozoans. J Stat Data Sci Educ. 2022;30(2):198–206.
Gallardo-Alba C, Grüning B, Serrano-Solano B. A constructivist-based proposal for bioinformatics teaching practices during lockdown. PLoS Comput Biol. 2021;17(5):e1008922.
Gerbing DW. Enhancement of the commandline environment for use in introductory statistics course and beyond. J Stat Data Sci Educ. 2021;29(3):251–6.
Gijbels D, van der Watering G, Dochy F, van der Bossche P. New learning environments and constructivism: the students’ perspective. Instr Sci. 2006;34:213–26.
von Glasersfeld E. A constructivist approach to teaching. In: Steffe L, Gale J, editors. Constructivism in education. Hillsdale, NJ: Erlbaum; 1995. p. 3–16.
Grover S, Pea R. Computational thinking: a competency whose time has come. In: Sentance S, Barendsen E, Schulte C, editors. Computer science education perspective on teaching and learning in school. Bloomsbury Academic; 2018. p. 19–38.
Hedeker D, Gibbons RD. Longitudinal data analysis. Wiley; 2006.
Holman JO, Hacherl A. Teaching Monte Carlo simulation with Python. J Stat Data Sci Educ. 2022. https://doi.org/10.1080/26939169.2022.2111008.
Horton NJ, Hardin JS. Integrating computing in the statistics and data science curriculum: creative structures, novel skills and habits, and ways to teach computational thinking. J Stat Data Sci Educ. 2021;29(1):S1–3.
Karran JC, Moodie EEM, Wallace MP. Statistical method use in public health research. Scand J Public Health. 2015;43:776–82.
Kite V, Park S, Wiebe E. The code-centric nature of computational thinking education: a review of trends and issues in computational thinking education research. SAGE Open. 2021. https://doi.org/10.1177/21582440211016418.
Lee I, Malyn-Smith J. Computational thinking integration patterns along the framework defining computational thinking from a disciplinary perspective. J Sci Educ Technol. 2020;29:9–18.
Lindsey J. Models for repeated measurements. Oxford University Press; 1999.
Loux T, Varner S, VanNatta M. Flipping an introductory biostatistics course: a case study of student attitudes and confidence. J Stat Educ. 2016;24:1–7.
Loyens SMM, Rikers RMJP, Schmidt HG. Relationships between students’ conceptions of constructivist learning and their regulation and processing strategies. Instr Sci. 2008;36:445–62.
Marasinghe MG, Meeker WQ, Cook D, Shin T. Using graphics and simulation to teach statistical concepts. Am Stat. 1996;50(4):342–51.
McGraw JB, Chandler JL. Flipping the biostatistics classroom, with a twist. Bull Ecol Soc Am. 2015;96(2):375–83.
McLaughlin J, Kang I. A flipped classroom model for a biostatistics short course. Stat Educ Res J. 2017;16(2):441–53.
Murphy C. Vygotsky and science education. Springer; 2022.
Reinhart A, Evans C, Luby A, Orellana J, Meyer M, Wieczorek J, Elliot P, Burckhardt P, Nugent R. Think-aloud interviews: a tool for exploring student statistical reasoning. J Stat Data Sci Educ. 2022;30(2):100–13.
SAS Institute Inc., SAS/STAT software, version 9.4. Cary, North Carolina; 2016.
Schmidt HG. Problem-based learning: rationale and description. Med Educ. 1983;17:11–6.
Shillam CR, Ho G, Commodore-Mensah Y. Online biostatistics: evidence-based curriculum for master’s nursing education. J Nurs Educ. 2014;53(4):229–32.
Sigal MJ, Chalmers RP. Play it again: teaching statistics with Monte Carlo simulation. J Stat Educ. 2016;24(3):136–56.
Simpson JM. Teaching statistics to non-specialists. Stat Med. 1995;14:199–208.
Sommer A, Katz J, Tarwotjo I. Increased risk of respiratory disease and diarrhea in children with preexisting mild vitamin A deficiency. Am J Clin Nutr. 1984;40:1090–5.
Son JY, Blake AB, Fries L, Stigler JW. Modeling first: applying learning science to the teaching of introductory statistics. J Stat Data Sci Educ. 2021;29(1):4–21.
Vinje H, Brovold H, Almøy T, Frøslie KF, Sæbø S. Adapting statistics education to a cognitively heterogeneous student population. J Stat Data Sci Educ. 2021;29(2):183–91.
Vygotsky LS. Mind in society: the development of higher psychological processes. Cambridge, MA: Harvard University Press; 1978.
Vygotsky LS. Thinking and speech. In: Rieber RW, Carton AS, editors. The collected works of L.S. Vygotsky: problems of general psychology, vol. 1. New York: Plenum Press; 1987.
Wass R, Golding C. Sharpening a tool for teaching: the zone of proximal development. Teach High Educ. 2014;19(6):671–84.
Wass R, Harland T, Mercer A. Scaffolding critical thinking in the zone of proximal development. Higher Educ Res Dev. 2011;30:317–28.
Wing JM. Computational thinking. Commun ACM. 2006;49(3):33–5.
Wood D, Bruner JS, Ross G. The role of tutoring in problem solving. J Child Psychol Psychiatry. 1976;17:89–100.
Zeger SL, Karim MR. Generalized linear models with random effects: a Gibbs sampling approach. J Am Stat Assoc. 1991;86:79–86.
Zheng Q. Improving the teaching of biostatistics in an online master degree program in epidemiology. In: Proceedings of the 5th international conference on distance education and learning. Association for Computing Machinery; 2020. p. 89–93.
Zheng Q. Let master of public health students experience statistical reasoning. Athens J Health Med Sci. 2020;7(1):47–62.
Zheng Q. Let computational thinking permeate biostatistics education of public health students. In: Proceedings of the 6th international conference on distance education and learning. Association for Computing Machinery; 2021. p. 283–8.
Acknowledgements
I am indebted to three erudite reviewers whose detailed comments helped me put the exposition on a firmer theoretical footing and in a more accessible style.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Zheng, Q. Integrating computational thinking into a longitudinal data analysis course for public health students. Discov Educ 1, 15 (2022). https://doi.org/10.1007/s44217-022-00015-w
Keywords
 Computational thinking
 Zone of proximal development
 Knowledge elaboration
 Likelihood function