1 Let’s just work together! Paper, laptop, and tablet as equally effective tools for groupwork in college

Digital resources have become a mainstay in postsecondary education. College students today often bring multiple screens with them to campus, with smartphones and laptops being most common. Galanek et al. [1] reported that 95% of undergraduate students have access to smartphones, 91% to laptops, and about 40% to tablets. Laptop and smartphone ownership rates are regularly over 90% [2]. By digitally accessing books, readings, and other course materials, students can save on printing costs, avoid carrying around significant weight in paper, and instantly share materials via online collaborative tools. While research on technology in education has been expanding, such as in the area of reading comprehension (see relevant meta-analyses: [3, 4]), research specifically on utilizing technological tools in a brief, in-person, small group academic context appears limited.

In the present study, we investigated paper, laptop, and tablet resources more closely in this collaborative group context. Collaboration can be considered a combination of coordination and cooperation: students need to coordinate on joint goals and cooperate in a way that leads to shared understanding [5]. Devices vary in their affordances and may influence a collaborative group setting in distinct ways. Paper is often the tried-and-true resource for group activities. However, print materials can present challenges such as needing to print in advance (multiple copies if not wanting to share) and being torn or wrinkled as they are passed back and forth or placed in a backpack. Laptops have been well studied for education and are owned by most college students [2, 3]. Students may have had computer classes throughout their schooling, leading to high familiarity and prior experience. However, laptops are also heavy to carry and pass around, and their built-in touchpads and scrollbars can make for a sensitive interface. Tablets are less well examined and owned by fewer college students [2, 6, 7]. Yet they seem to have significant potential to remedy some of the challenges of paper or laptops: they cannot be torn, are lightweight, and come equipped with user-friendly touchscreen interfaces and/or pencil accessories. Evidence is emerging that they may aid academic performance in contexts such as the music classroom [8]. Groupwork can also be aided or complicated by the number of devices available. Sharing one device may be more challenging, especially as group size increases, given the coordination required within a limited physical space [6]. That said, every student having an individual device can also be challenging in terms of keeping everyone simultaneously focused on the same goal and task.

Our research seeks to explore these dynamics to provide recommendations to educators regarding what students should bring to a small group context to ease collaboration. Groupwork has significant potential to enhance our learning environment; much research suggests that groupwork and collaboration help students engage with material and develop a deeper understanding of the content [9, 10]. However, many college students feel that groupwork can be tedious [11]. When compared to individual work, unique obstacles emerge, including how best to share materials in the moment, divide the work, and prevent social loafing. Investigating how different devices might facilitate or impede groupwork thus seems an important endeavor.

In this introduction, we will first review the past literature on computer-supported collaborative learning in general, and then discuss tablets and the in-person, small group context in particular. We will next introduce relevant theoretical frameworks and provide context on our current study and contributions. With technology rapidly advancing and college students arriving on campus with these devices already in hand, such research is imperative.

2 Computer-supported collaborative learning

Significant research exists on computer-supported collaborative learning (CSCL). It has long been understood that cognitive growth can occur in a social context [12]. With the rise of computers, and now mobile technology, this social context can be infused with digital devices to aid collaboration and knowledge co-construction. CSCL research has unfolded across many different contexts, from online courses to specific technologies to particular design features [13]. However, CSCL research has not typically focused on our current context of small groups working together on brief, in-person tasks.

Nevertheless, we can take away key lessons from the CSCL body of literature. Digital devices can support collaborative learning if the affordances of the device are advantageous to the setting [13]. There may be a goodness-of-fit match or discrepancy between the device and the group context. There are also risks, like distraction, that must be overcome [14]. That said, meta-analyses support overall positive effects of infusing technology into collaborative activities. Chen et al. [15] reviewed 425 empirical studies on CSCL. In collaborative learning activities, they found positive effects of computer use on knowledge gain, skill acquisition, task performance, interaction, and student perceptions. Talan [16] conducted a meta-analysis on 40 studies linking CSCL to academic achievement, finding a moderate positive effect. The author cautioned, however, that different CSCL applications, digital tools, and learning tasks may lead to variable outcomes.

A more recent trend in CSCL has been mobile computer-supported collaborative learning (mCSCL). Mobile devices offer additional convenience and connectivity when compared to more stationary technologies. As per a meta-analysis by [17], “although mobile devices have become valuable collaborative learning tools, evaluative evidence for their substantial contributions to collaborative learning is still scarce” (p.768). Their meta-analysis of 48 journal articles concluded that mobile CSCL did meaningfully improve collaborative learning. They commented on how mobile devices allow for portability, active participation, and efficiency. As in the prior meta-analysis, though, the authors warned of variability based on moderating factors like subject matter, group size, and duration.

3 Tablets and the group context

The preceding section focused on computers and collaborative contexts. Our present research explores a newer mobile device: the tablet. Literature specifically on tablets for in-person, small group collaboration appears scant. Dhir et al. [6] reviewed iPad learning literature, emphasizing that “some educational researchers and teachers have supported the iPad’s use for collaborative groupwork but limited scientific studies are available to actually examine this” (p.716).

Pearson et al. [18] did provide commentary on iPad use in this type of context, specifically for collaborative reading. They mentioned that the goal in this endeavor would be essentially to mimic paper-based tools while also capitalizing on the benefits of a single device. iPads have unique benefits, including apps that act like clipboards/pads (they can feel like physical paper and pen while automatically saving work), high portability, easy touchscreen interaction, and automatic scaling to the document size and shape. Kim et al. [19] looked more broadly at a college-wide tablet program and noted how tablets with collaborative software can enhance peer and faculty-student interactions. Students in this research commented that the tablet can help with sharing materials, providing virtual whitespace, and explaining concepts.

Relatedly, [20] examined the use of iPads in groupwork for undergraduate nursing students. Students had one iPad and worked together in groups of three to complete a series of seminar or tutorial activities that ended with a brief presentation. They collaborated on multiple occasions and were assessed at the midpoint and endpoint of classes. Based on the qualitative interim data, the iPads resulted in positive outcomes for content engagement, presentation skills, opportunities to later return to materials, and overcoming fears of technology. Negative remarks included the need to share devices amongst group members and the presentations themselves inducing anxiety. Based on the quantitative final data, an overwhelming 85% of learners felt that the iPad had helped with engagement, interactivity, and presentation skills. That said, there was no comparison to another device like a computer; it is possible that the positive reception simply reflected the novelty of the approach.

Similarly, [21] investigated undergraduate students’ use of iPads for collaborative learning in a biology course. Students, in groups of three to five, used one iPad and an Apple Pencil in conjunction with an electronic whiteboard during collaborative study sessions across the semester. To assess outcomes, researchers graded the note materials produced during these sessions as well as an oral presentation at the end of the term. The results demonstrated that, relative to a control group which did not use iPads for collaboration, student knowledge integration and synthesis improved. Additionally, when asked to rate the degree to which they agreed the iPad was a useful tool for cooperative learning, students were quite positive, averaging 4.05 on a 1–5 scale.

4 Frameworks

More research is clearly needed on this small group, in-person, mCSCL context especially for newer technologies like tablets. We specifically wanted to increase the research on the utility of the devices themselves as opposed to looking at different features like video chat or unique apps relevant to only one device. When the tasks are held as constant as possible, what is the role of device in group outcomes and student perceptions?

To begin, we first considered how our research fit with the burgeoning mCSCL body of work. Accordingly, we adopted the Activity Theory (AT) Based Framework for mCSCL discussed in [17]. This framework consists of six elements highlighting different variables within mCSCL that can moderate or influence each other and the outcomes of a group task. The variables include tools/instruments (devices used), subjects (participants/learners), objectives (such as knowledge creation or interaction), rules/control (such as duration or subject matter), context (including physical setting or student perception), and communication/interaction (such as in-person versus online).

The present study can be understood within this context. The tools/instruments included paper, laptop, and tablet. The subjects were small groups of undergraduate students. The objectives were to complete a series of group tasks successfully and have a positive group experience. The rules/control included psychological subject matter and a brief duration. The context included a small room and a formal learning task to receive course credit. And communication/interaction included an in-person discussion between two or three students aided or complicated by the affordances of the provided device. This framework elucidates the different pieces to our approach and how our work might fit with the mCSCL literature. We will return to this model later in the Discussion section.

Recent research by [22] on iPad use also offers a useful framework for consideration of new technologies. They proposed that students’ experience with mobile devices was defined by (1) drivers (characteristics that enhance the experience), (2) moderators (traits that moderate the experience), and (3) speed bumps (traits that impede the experience). [22] reported that drivers for tablets can include access to many applications, portability, and multiple options to scaffold learning. Speed bumps for tablets can include eyestrain, technical problems like stable Internet connection, and distractions. Moderators that could alter perceptions include the professor’s support for the device, clear instructions and communication, and learner emotions in general.

It will be informative to consider possible drivers, moderators, and speed bumps alongside the more contextual AT-mCSCL model when determining how these different devices fare in a small group context. Given that many educational institutions remain hesitant to incorporate mobile devices into the classroom [23], such theoretical application seems crucial for curtailing concerns and determining the best possible recommendations for positive, targeted use.

5 The current study

Our research sought to fill multiple gaps within the literature on technology and groupwork. First, research on mCSCL is growing, but research is minimal when it comes to the context of an academically oriented, brief, in-person, small group task. Much of the research on digital resources in groupwork has focused on the online setting, such as how technology allows for video chat, easy communication and coordination, or the sharing of documents for joint editing [24, 25]. Or, research has highlighted classroom activities more generally, with brief mentions of groupwork [6, 26]. Brief small group activities in the classroom (or structured small group meetings for class projects) are common and should not be overlooked in regard to what physical devices might lend themselves best to that context.

Second, we not only explored laptops, but also tablets. This design choice allowed for a comparison between digital devices, in addition to paper, to elucidate how different technologies might offer unique affordances [13]. As [17] noted, research on mobile devices in the group context is minimal.

Third, we not only manipulated device but also set-up (one shared device versus multiple individual devices). This additional manipulation seemed important given that technology is expensive, and it may not be possible for all students to have a laptop or tablet. And, as dictated by the AT-mCSCL framework, resources and situational context matter.

Fourth, we opted to include both outcome and perception measures to gain a better sense of where variability may lie. As per meta-analyses like [17] and [16], results can vary based on what exactly is being measured. We included four outcome measures at the group level (time spent on task 1, quiz score on material learned during task 1, time spent on task 2, quality score for product produced during task 2) and four individual perception measures (peer ratings, satisfaction level, perceived effort and difficulty). Having multiple dependent variables allowed for a fuller picture of the group experience.

Lastly, we used a mixed methods approach. Qualitative data has the potential to help explain the reasons behind the numbers. We hoped to gain a more in-depth understanding of how paper, laptops, and tablets were being used and perceived in this small group context. Such nuanced data has the potential to better explain any recommendations that are derived from the present research as well as inspire future research.

Taken together, we aimed to extend the prior literature on mCSCL via exploring an understudied context and understudied device alongside consideration of multiple contextual factors and measures. Five research questions and related hypotheses guided our work (see Table 1).

Table 1 Research questions and hypotheses

6 Method

6.1 Participants

One-hundred twenty groups of two to three undergraduate students (N = 300) from a northwestern liberal arts college in the United States completed this study for research credit in a psychology course in 2019–2021. This was a convenience sample drawn from the institution’s human subjects pool, selected for ease of accessing the target population (college-aged students). Median age was 19. Students self-reported gender as 63.7% female, 36% male, and 0.3% other. Race was classified as 66.7% White, 8.7% Black or African American, 7% mixed race, 5% Asian, 1% American Indian or Alaskan Native, 0.7% Native Hawaiian or Other Pacific Islander, 10% Other, and 1% unreported. Ethnicity was indicated as 76.7% Not Hispanic or Latino, 16.7% Hispanic or Latino, and 6.7% unreported. For class year, 54.7% were first-year students, 24.7% sophomores, 15.7% juniors, and 5% seniors. One additional group participated in the study but was excluded due to experimental error. In a 3 × 2 between-subjects design, groups were randomly assigned to a device (paper, laptop, or tablet) and set-up (whether the group used one shared device or multiple individual devices).

6.2 Materials

Groups reviewed two sets of seven PowerPoint slides. The task 1 slides covered autism spectrum disorder, while the task 2 slides covered attention deficit hyperactivity disorder. Information was current to the DSM-5 [27], included background information such as characteristics and prevalence, and incorporated newer information on technology use in children with each diagnosis. One example of slide content reads: “Digital media in the classroom can help children with ASD learn and communicate • Why? • Can assist with rehearsal and repetition • Fewer social requirements.” A second example reads: “Why might children with ADHD be at an increased chance of overusing video games & digital media? • Overall – video games often reinforce behavioral symptoms of ADHD • Bright lights • Constant movement • Frequent change • Immediate feedback and reactions • Dopamine release in brain, e.g., when level up • No writing (which can be a challenge).” Both slide sets referenced a book by [28], from which much of the information was drawn.

The slides were created by a college professor and were similar to what students might encounter in a class session. Students may have been somewhat familiar with the content from an introductory psychology course, but likely were unfamiliar with most details, as the specific information went beyond content typically presented in introductory psychology courses.

6.3 Measures

6.3.1 Group outcomes

There were four group outcomes measured in the present study. First, a stopwatch was used to record (a) how long the group used the device to review the task 1 set of slides as well as (b) how long the group used the device to create their own quiz while reviewing the task 2 slides.

Groups also completed a 10-question multiple-choice knowledge quiz on the content from the task 1 slides. To enhance authenticity and thereby validity, the quiz was written and keyed by a college professor with expertise in the subject matter. Each group was given a score out of 10.

Groups also jointly wrote a quiz on the task 2 slides, with the final product being coded on a 1 (low quality) to 4 (high quality) scale. A score of 1 indicated quiz content that was inaccurate to the slides or incomprehensible; 2 indicated content accurate to the slides but with some unclear items, or with all-obvious answers and poor alternative options; 3 indicated content accurate to the slides with reasonable answer options but minor clarity issues; and 4 indicated content fully accurate to the slides, written clearly, and with reasonable answer options. Two researchers developed the coding scheme together after extensive review of the quizzes and discussion of what merited each score. Reliability between coders was above 90%, with disagreements resolved via discussion between coders.
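Interrater reliability here is indexed as simple percent agreement. As an illustrative sketch (the function and variable names are our own, not from the study materials), agreement between two coders' 1–4 quality scores can be computed as:

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of quizzes on which two coders assigned the same score."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must score the same set of quizzes")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical scores: coders agree on 4 of 5 quizzes -> 80% agreement
print(percent_agreement([4, 3, 2, 1, 4], [4, 3, 2, 2, 4]))  # 80.0
```

Note that percent agreement does not correct for chance agreement; chance-corrected indices such as Cohen's kappa are an alternative when categories are few.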

6.3.2 Individual perceptions

There were four individual perceptions measured in the present study. First, a brief peer review form was completed for each group member (adapted from [29]). Students rated each member (up to 5 points per question) on how much that member had participated in the group project, helped keep the group focused on the task, and contributed useful ideas, as well as on the quantity and the quality of work performed. Ratings across these five questions were averaged to give each group member a peer score, with higher scores representing more positive perceptions.

Next, students indicated how satisfied they were with the learning experience, as well as how difficult and effortful they perceived it to be, on a 0–6 scale. Learning research frequently uses similar satisfaction questions and 7-point scales [30,31,32]. Such self-report questions on cognitive load (effort and difficulty) have been found valid and reliable [33, 34]. To explore these ratings more deeply, students were also asked to explain why they had selected that number for satisfaction, effort, and difficulty.

6.3.3 Background survey

To provide background on technology use, students indicated which devices (smartphone, tablet, laptop, desktop) they owned and how frequently they used them (1 rarely to 5 frequently). Students next provided an opinion on what the best and worst part of using their assigned device was in this study. They also provided suggestions for future use of that device within a small groupwork context.

Students then completed a 24-question perceived usefulness questionnaire (adapted from [35]) regarding their tablet and laptop use in the classroom. One example statement is Using an iPad in college would enable me to accomplish tasks more quickly. Students provided a 0 (disagree strongly) to 6 (agree strongly) response.

6.3.4 Demographics

Lastly, students reported their gender, class year, age, race, and ethnicity.

6.4 Procedure

When participants arrived at the lab, they reviewed a consent form with the researcher. Participants were then positioned together at a small table and began task 1. They were asked to review the first set of PowerPoint slides via their pre-assigned device and set-up. The group then jointly completed a multiple-choice paper-based quiz about the information (only one copy of the quiz was available to each group). Moving on to task 2, groups reviewed a new set of slides on their device(s) and created their own quiz on that device as well, under the guise that it would be taken by the next group (in reality, this deception simply lent additional meaning and weight to the group task). For laptop users, the quiz was typed into a Microsoft Word file. iPad users used the notes app on their tablets and had the option to use the corresponding digital pencil. Paper users simply wrote with a pen on a pad of paper. Following this group construction task, each participant made their various individual ratings and completed the survey on separate, private laptops. Finally, participants were thanked, awarded course credit, and informed of the deception surrounding their written quizzes.

6.5 Analytical approach

To determine differences between devices and set-ups (research questions 1–3), we selected the MANOVA approach, as our design met its requirements: two or more continuous dependent variables alongside two or more categorical independent variables. Further, we used a between-subjects design to ensure independence of observations. Given that some measures were taken on groups as a whole and other measures were taken on individuals, we conducted two separate MANOVAs, both with the independent variables of device and set-up. The first MANOVA examined the group-level variables of task 1 reading time, task 2 groupwork time, task 1 quiz score, and task 2 product quality score. The second MANOVA examined the individual-level variables of peer ratings, satisfaction level, perceived effort, and perceived difficulty. In both cases, we employed contrasts to examine the device variable more deeply. Contrast 1 compared print to digital devices (tablet and laptop combined). Contrast 2 compared laptop to tablet, to examine differences specifically between the digital devices. To ascertain relationships between peer ratings and individual perceptions (research question 4), we used correlational analyses to relate the variables to one another. While we had full data on all group-level measures, some respondents skipped one or more of the individual-level questions. As many participants as possible were included in each analysis.
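For readers unfamiliar with the omnibus statistic reported below, Wilks' Lambda is the ratio det(E) / det(E + H), where E is the within-groups sum-of-squares-and-cross-products (SSCP) matrix and H is the between-groups SSCP matrix; values near 1 indicate little multivariate group separation. A minimal numpy sketch for a one-way case follows (our own illustration under simplifying assumptions, not the study's analysis code, which also crossed device with set-up):

```python
import numpy as np

def wilks_lambda(Y, groups):
    """Wilks' Lambda for a one-way MANOVA.

    Y: (n, p) array of dependent variables; groups: length-n labels.
    Lambda = det(E) / det(E + H), where E is the within-groups SSCP
    matrix and H is the between-groups SSCP matrix.
    """
    Y = np.asarray(Y, dtype=float)
    groups = np.asarray(groups)
    grand_mean = Y.mean(axis=0)
    p = Y.shape[1]
    E = np.zeros((p, p))   # error (within-groups) SSCP
    H = np.zeros((p, p))   # hypothesis (between-groups) SSCP
    for g in np.unique(groups):
        Yg = Y[groups == g]
        mg = Yg.mean(axis=0)
        dev = Yg - mg
        E += dev.T @ dev                      # within-group scatter
        diff = (mg - grand_mean).reshape(-1, 1)
        H += Yg.shape[0] * (diff @ diff.T)    # between-group scatter
    return np.linalg.det(E) / np.linalg.det(E + H)
```

When all group means coincide, H is a zero matrix and Lambda equals 1 (no multivariate effect); Lambda shrinks toward 0 as group separation grows relative to within-group variability.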

To gain a better sense of reasons behind our results as well as students’ perceptions more generally (research question 5), we conducted exploratory quantitative and qualitative analyses on the background survey data. This involved thematic analysis as well as extensive coding by two researchers. Via a six-step approach devised by [36], we familiarized ourselves with the responses, developed codes and themes, expanded/developed/merged categories as appropriate, and then embedded the data into this report. Interrater reliability for all coding was above 90%, with disagreements resolved via discussion between coders.

7 Results

7.1 Do group outcomes and student perceptions differ across devices?

When considering if group outcomes differed across devices (see Table 2), the omnibus MANOVA was not significant, Wilks' Lambda = 0.91, F(8,222) = 1.39, p = 0.20, ηp² = 0.05. No univariate test reached significance. Among the contrasts, a single test was significant: for time to complete the group task of reviewing slides while writing their own quiz, students were a few minutes quicker when using paper rather than digital devices (p = 0.04).

Table 2 Means and standard deviations

When considering student perceptions, the omnibus MANOVA was not significant, Wilks' Lambda = 0.99, F(8,452) = 0.37, p = 0.94, ηp² = 0.01. No univariate test or contrast reached significance.

These findings support hypothesis 1b, suggesting little to no variability in outcomes or perceptions across devices.

7.2 Do group outcomes and student perceptions differ based on number of devices?

When considering if group outcomes differed by set-up (see Table 2), the omnibus MANOVA was significant, Wilks' Lambda = 0.92, F(4,111) = 2.53, p = 0.04, ηp² = 0.08. One univariate test was marginally significant; students took longer to read the task 1 set of slides if multiple devices were available, F(1,114) = 3.62, p = 0.06, ηp² = 0.03.

As for student perceptions, the omnibus MANOVA was not significant, Wilks' Lambda = 0.98, F(4,226) = 1.41, p = 0.23, ηp² = 0.02. No univariate tests were significant.

These findings largely negate hypothesis 2. Beyond a marginal difference in reading time, the number of devices did not make a meaningful difference.

7.3 Do device and number of devices interact?

Furthermore, though the interaction between device and set-up was not significant for group outcomes, Wilks' Lambda = 0.90, F(8,222) = 1.43, p = 0.18, ηp² = 0.05, the univariate test for task 2 product quality score did reach significance, F(2,114) = 3.65, p = 0.03, ηp² = 0.06. Looking at the averages, a different data pattern was evident in each condition. For the laptop groups, task 2 product quality was nearly identical regardless of the number of devices. For the tablet groups, having multiple devices seemed to facilitate higher task 2 product quality. For the paper groups, having one device seemed to facilitate higher task 2 product quality.

Similarly, the interaction between device and set-up was not significant for student perceptions, Wilks' Lambda = 0.97, F(8,452) = 1.03, p = 0.41, ηp² = 0.02. No univariate tests were significant.

As results were nearly identical across set-ups, hypothesis 3 was not supported; the single significant univariate interaction was only partially in line with the previously stated hypothesis.

7.4 Do peer ratings and student perceptions relate?

We did find some support for hypothesis 4. Satisfaction and peer ratings were positively correlated, r(297) = 0.34, p < 0.001. The more satisfied, the higher the peer ratings. Perceived difficulty and effort were also positively correlated, r(233) = 0.35, p < 0.001. Counter to our hypothesis though, peer ratings and satisfaction did not relate to perceived difficulty or effort (rs < 0.10, ps > 0.10).
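These correlational analyses are standard Pearson correlations; the degrees of freedom reported in parentheses reflect n − 2 (e.g., r(297) implies 299 respondents for that pair of measures). As a brief illustrative sketch (function and variable names are our own):

```python
import numpy as np

def pearson_with_df(x, y):
    """Pearson r and its degrees of freedom (n - 2) for two paired vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]  # off-diagonal of the 2x2 correlation matrix
    return r, len(x) - 2

# Hypothetical, perfectly linear ratings give r = 1.0 with df = 2
r, df = pearson_with_df([1, 2, 3, 4], [2, 4, 6, 8])
```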

7.5 What are students’ perceptions of these technologies in general and within the group context?

Given the dearth of research on tablets, and physical devices generally, in a small groupwork context, it seemed important to review students’ general perceptions of these technologies and their utility for groupwork. This section was exploratory but had the potential to meaningfully inform our results as well as unveil directions for future research.

To determine ownership of these technologies, students were asked if they owned a smartphone, tablet, laptop, and desktop. Almost all students owned smartphones (98%) and laptops (97.3%). Tablets were owned by 26.7% of the sample, while desktop computers were uncommon (11.7%). Correspondingly, when rating use on a 1–5 scale, it was clear that smartphones were used regularly (M = 4.89, SD = 0.45), as were laptops (M = 4.56, SD = 0.67). Tablets (M = 2.63, SD = 1.33) were used intermittently and desktops (M = 2.47, SD = 1.20) were used the least. For our study’s two devices, this information confirms that students had considerably more experience navigating laptops for academic work than tablets.

Students also completed a perceived usefulness questionnaire adapted from [35] to suit tablets and laptops. Students rated laptops (M = 5.03, SD = 0.83) as more useful than tablets (M = 4.01, SD = 1.10) for education, t(299) = 15.94, p < 0.001, d = 0.92. Notably, though, both averages fall on the positive side of the 0–6 scale, even if more educational value was assigned to the laptops.
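This comparison is a paired-samples t test. One common effect-size convention for paired data computes Cohen's d as the mean difference divided by the standard deviation of the difference scores, which relates to t by t = d√n (consistent with the values above: 0.92 × √300 ≈ 15.9). A small sketch with hypothetical 0–6 usefulness ratings (our illustration; other conventions standardize by a pooled SD instead):

```python
import numpy as np

def paired_t_and_d(x, y):
    """Paired-samples t statistic and Cohen's d for the difference scores.

    d = mean(diff) / sd(diff); t = d * sqrt(n). This is one common
    convention for paired designs, not the only one.
    """
    diffs = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = len(diffs)
    d = diffs.mean() / diffs.std(ddof=1)  # sample SD of the differences
    t = d * np.sqrt(n)
    return t, d, n - 1  # t statistic, effect size, degrees of freedom

# Hypothetical laptop vs. tablet ratings from four students
t, d, df = paired_t_and_d([6, 5, 6, 5], [5, 3, 5, 3])
```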

To better understand students’ satisfaction ratings, we asked them to state why they had selected a particular rating. Students were largely satisfied in the study and often just re-stated that point. Some students provided additional clarity, with the most common reason for that satisfaction being feeling like they had learned something interesting (37%) or generally liking the collaborative/group component (19%).

To better understand students’ effort and difficulty ratings, we asked them to state why they had selected their ratings. Difficulty ratings were quite low: 61% of students believed that the group task was easy, while an additional 29% reported some difficulty. The biggest factors contributing to students’ perception of difficulty were groupmates (24%), access to information (19%, including access to instructions and resources/device), the task itself (19%), and collaboration on what to say when answering the test and creating the quiz (15%). Effort ratings were on the lower side, though nearing the midpoint of our scale: 50% of students claimed that the task required little to no effort, while an additional 38% reported exerting some effort. The biggest influences on effort were similar to those for difficulty: groupmates (22%), collaboration on what to say when answering the test and creating the quiz (20%), the instructions/information available (14%), and the task itself (12%).

The results presented earlier also showed striking similarity in group outcomes and student perceptions across devices and set-ups. Our open-ended data provides some potential reasons why this similarity existed. We asked students to comment on the best and worst parts of working with their assigned device, as well as to provide suggestions for the future. Below, we have included themes that were mentioned by at least 10% of respondents.

When asked to indicate the benefits of using paper resources for the group activities, students reported that using paper made it easier to transition between slides (33%), was less distracting (18%), improved collaboration (17%), and improved comprehension (13%). As for downsides, many students reported no downsides (31%). Other common responses implicated difficulty with transitioning between the slides (28%) or poor collaboration (21%). When asked to provide suggestions for the researchers on using paper resources for future group activities, recommendations included physical changes to the slides such as single-sided pages, writing notes on slides, or staple-less copies (22%), adjusting the number of available devices (18%), or visual design changes to the slides such as adding slide numbers or images (12%). A notable proportion of students provided no recommendations (32%).

For benefits of laptops, the most common responses related to ease of use (74%, with 39% focused more on navigation and 29% focused more on smooth quiz creation). Some students also mentioned liking the display (16%) or a general preference for technology (14%). Key drawbacks included difficulty with sharing laptops (16%), a preference for physical resources (15%), lack of familiarity with the specific laptop (13%), or simply believing there were no drawbacks (22%). As for suggestions for future group projects, many students had none (33%), but those who gave suggestions focused on general advice like using copy/paste or positioning the laptop a certain way (21%) and using multiple devices (14%).

For benefits of tablets, the most common responses related to ease of use (75%, with 45% focused more on navigation and 26% focused on multitasking). Some students mentioned liking the mobility (19%) and the associated digital tools (11%). Drawbacks included difficulty typing (17%), size of the display (17%), the digital pencil (14%), difficulty sharing one device (12%), a preference for physical resources (11%), and lack of familiarity (11%). As for suggestions for future group projects, some students gave none (19%), but those who gave suggestions focused on options for alternative reading/writing methods such as text-to-speech (16%), general advice like how to move between the slides and quiz apps (15%), getting a keyboard for the tablet (11%), and offering multiple devices to the group (11%).

8 Discussion

In the common context of small group classroom activities of short duration, it appears that paper, laptops, and tablets can all serve as effective platforms for collaboration. Tellingly, satisfaction levels were decidedly positive and difficulty ratings decidedly low across all devices and set-ups. The task here was simple and straightforward, requiring the use of just a couple of applications on the devices for a brief duration. This information can prove useful for educators and students alike.

We did note one small difference between paper and digital devices in terms of time spent reviewing the second set of slides and creating the quiz (task 2). Students seemed to work faster with paper. This may simply reflect the extra couple of minutes it can take to locate the correct application, open a blank file, scale it to the appropriate size, and later save and name the file before closing the app. None of those steps are necessary when working with paper. Because our task was brief, such small components could represent a significant proportion of the time spent.

Interestingly, group outcomes and individual perceptions were also equivalent across set-ups. Dhir et al. [6] had noted that coordination when using a single device could be challenging. However, in our work, the only difference was that students who had individual devices took (marginally) longer to read than students who shared a device with other group members. This result makes intuitive sense, as individual devices provide more of an opportunity to move at one’s own pace without feeling as much pressure to speed through. When all students are looking at the same screen, there may be more pressure for a slow reader to move on so as to not delay other group members.

We did note an interesting interaction between device and set-up for task 2 product quality. The number of devices had no influence on task 2 product quality when using laptops. With tablets, multiple devices resulted in higher product quality, whereas with paper, working from a single shared resource resulted in more favorable product quality scores. Task 2 involved collaborating with one’s group to create a quiz for the next team; it was more sophisticated and extensive than the graded quiz in task 1. It is possible that this finding reflects the variable affordances of each device. Paper might be more conducive to working collectively on a given task, which could explain the improved scores when only a single device was available. In contrast, multiple tablets may be more conducive to breaking up the task at hand and completing those parts independently before coming together at the end. With laptops, students may use the same work strategies regardless of set-up. This interaction should be explored via future research on how work strategies play a role in the relationship between the type and number of devices used in collaborative groupwork.

Furthermore, we noted a positive correlation between peer ratings and satisfaction levels. If a student had positive experiences with their group members, they generally felt more satisfied with their experience regardless of the details surrounding device or set-up. Perhaps surprisingly, perceived effort and difficulty levels were not related to peer ratings or satisfaction levels. At first glance, this appears counter to other research showing coherence between such variables [7, 34]. However, that prior research did not typically explore group settings, and our task was reported as quite easy. Perhaps in the context of a more challenging task, relationships among those variables would have emerged.

Student perceptions revealed pros and cons to each device, reflective of specific affordances. Interestingly, ease of use was a top benefit for all resources. Drawbacks reflected the lack of affordances of a given device, such as difficulty sharing a larger machine or an unfamiliar technological tool like the digital pencil. Suggestions for future projects sought to amplify a device’s potential within this small group context, for instance, adding a keyboard to ease typing on the tablet or using copy/paste on the laptop. We can also note that students rated laptops as more useful for education than tablets, though both devices were perceived positively. Perceptions can be one roadblock to employing mobile devices successfully in the classroom. Şimşek et al. [37] noted that perceptions regarding aspects like ease of use and usefulness influence one’s attitude about technology, which can in turn influence one’s intention to use it. Past research has indicated that the adoption of smaller, portable devices is typically a slow process in the classroom [23], though such devices can be beneficial for students [8]. Educators may often be overfocused on the potential challenges. Nevertheless, the advantages of mobile, multipurpose devices like tablets are becoming increasingly clear.

9 Theoretical frameworks

Returning to the AT-mCSCL framework [17], our approach and findings elucidate one route towards successful technology integration in a collaborative context. The tools of paper, laptop, and tablet were equally effective for the subjects of undergraduate students interacting in small in-person groups of two to three students. They successfully achieved the objective of completing a series of group tasks and perceiving their group and the task positively. The rules/control of psychological subject matter and a brief duration (< 1 h) helped set up this success in the context of a small room and a formal learning task resulting in course credit. These six elements lay the groundwork for illustrating how mCSCL might unfold in the classroom and further highlight mCSCL as a dynamic, multi-component system. Any of these components would be interesting to explore further in subsequent research; varying one could potentially change the outcomes seen here and act as a moderator.

Turning now to [22] and the discussion of drivers, moderators, and speed bumps in the context of tablets, clear examples of all three components were evident. For drivers, all three devices were easy to use. However, a crucial moderator may be perceived educational value. Similar to the findings of [38], students seemed to express more educational value around laptops than tablets. Such moderators can be barriers to adoption in the classroom. As for speed bumps, tablets remain the less familiar device and laptops continue as the physically larger/heavier device. Future research might explore how to counter the speed bumps reported here.

Overall, these frameworks complement the knowledge derived from the present study by shining a spotlight on areas to target as we incorporate more technology into the classroom. These frameworks also help to unveil the affordances of certain technologies so that we may better tailor the learner experience.

10 Practical implications

First and foremost, educators must carefully review the unique affordances of any device when opting to use it for classroom activities [13]. Both laptops and tablets have the driver of being easy to use, but that does not mean they are equally suited for all activities. For instance, laptops may be more useful for researching information across many browser tabs, while tablets may be more useful for scrolling through an e-book. Educators must consider aspects like screen size, user interface, and functionality when making these course design decisions.

Second, a clear potential speed bump for technology integration is cost. Many students cannot afford such devices on their own [39, 40]. Thus, for educators to effectively integrate mobile devices into their classrooms, colleges must work towards equitable access for students. Perhaps rental programs could be more widespread, more technology-infused classrooms like tablet/computer labs could be available, or other costly fees could be reduced to allow the redirection of those funds.

Third, we should further consider allowing students to select their own best resource for activities such as groupwork. Research showing equivalency between devices on various academic tasks is mounting [38, 41, 42]. Perhaps what matters most is simply that all students bring a resource, rather than a particular device. That said, professor restrictions on technology use in the classroom can often be strict, and professors may even ban certain mobile devices [38, 43, 44]. Such restrictions may be outdated and may hinder learning by forcing students to use resources against their preference. Students often pick screens or paper strategically; [45] suggested that considerations such as cost, speed, searching capabilities, general beliefs, and distraction potential, amongst other affordances, guide student choices.

Lastly, higher education institutions would be wise to offer targeted IT training to educators to discuss the affordances and utility of mobile devices in the classroom. Teacher resources and training directly affect a device’s success in the classroom [46]. As [22] mentioned, the professor is often a key moderating factor in whether or not the technology succeeds in its purpose. Researchers like [47] have supported the need to adjust teacher training programs to include information on different digital environments and their unique affordances for students within, and outside of, classrooms. This direction is necessary in a world full of digital devices.

11 Limitations and future directions

First, our findings apply only to the brief, simple groupwork tasks used here. Future research should compare devices for lengthier or more complex academic tasks. Different drivers and speed bumps may emerge when the context is shifted, and students may adjust their approach depending on the exact device and its available features, the physical set-up, and the specific task. Additionally, our quiz scores approached a ceiling effect, so a more challenging task and assessment would be worthwhile. It can also be noted that our groups differed in size and composition: some were pairs while others had three members, and in some groups, members knew each other from other settings. Group size and composition could act as contextual factors influencing the effectiveness of the approach (e.g., maybe friends share a device more effectively than strangers).

Furthermore, students completed the task on college-owned devices. We opted for this approach given the desire for experimental control. However, students may respond differently when using a less familiar versus more familiar device. In addition, we only compared paper, laptops, and tablets. A strength of our design was having multiple comparisons; however, examining similar devices like e-readers and smartphones would further extend our knowledge and fine-tune our recommendations to educators. It would also be interesting to compare different set-ups. In our work, all group members had the same device. In actual practice, there may be a mismatch amongst members, such as one person having nothing, one person having a tablet, and one person having a laptop. There may be contexts in which mismatching technologies is most useful, such as when different member roles cohere to different affordances of the technologies (e.g., a reader on the tablet and a scribe on the laptop).

Yet another route for future research would be to explore the distinctive features of digital devices. Interestingly, research has indicated that students often do not use the unique features of digital devices, like hyperlinks or audio pronunciations, to their benefit [48, 49]. Future research could explore such unique features further to determine their merit for students.

Lastly, larger-scale research would be exceedingly useful in unveiling best practices for student technology in the classroom. Perhaps akin to [20], students could use devices across a full academic term, for multiple classroom activities, with regular quantitative and qualitative assessments. Such work would further elucidate the contexts in which digital devices might be most useful.

12 Conclusion

Mobile devices are commonplace amongst college students, though they are encouraged to varying degrees in the classroom. As education continues to become a more digitized venture, it is imperative that we examine the ways in which mobile devices can assist students’ learning and academic experience. In the present study, groupwork unfolded equally well regardless of whether it was supported by paper, laptops, or tablets. Though outcomes and perceptions will certainly vary by specific contextual factors, this research provides evidence for one potential academic use of mobile devices in the classroom. Educators should reconsider their policies on mobile devices, perhaps moderating the type of use, but not use overall, in the classroom. Educators must also consider the affordances of each device when encouraging their use. Ultimately, these devices can help ease students’ academic lives in multiple ways. Course activities should reflect the contemporary and diverse learning styles of students.