Introduction

Purpose

The post-Soviet nations of Georgia and Ukraine seek to align higher education with democratic development and social progress. Having theorized the potential of fully online collaborative learning for democratization (Blayone, vanOostveen, Barber, DiGiuseppe, & Childs, 2017) and facilitated a pilot course for students in Ukraine (Mykhailenko, Blayone, & vanOostveen, 2016), the authors launched a broader program of educational-transformation research with partners in several post-Soviet countries. Conducted from socio-cultural (Langemeyer, 2011; Somekh & Nissen, 2011) and human-computer-interaction (Jonassen & Rohrer-Murphy, 1999; Kuutti, 1995) perspectives, this program began with an initial probe of student and professor digital competencies in Ukraine (Blayone, Mykhailenko, vanOostveen, Grebeshkov, Hrebeshkova, et al., 2017). Next, a lab-based study comparing self-reported digital competencies with recorded digital-learning activities produced an observationally grounded approach to readiness assessment (Blayone, vanOostveen, Mykhailenko, & Barber, 2017, 2018). Using this approach, the present study profiles the digital readiness of higher-education students in Georgia and Ukraine for fully online collaborative learning. The driving purposes are to contribute to ongoing educational transformation in the post-Soviet world and to offer online-learning researchers and practitioners an effective readiness-assessment toolkit.

Post-Soviet educational transformation

Ukraine and Georgia share a 70-year Soviet experience that shaped their institutions, psychologies and social values (Raikhel & Bemme, 2016). Since achieving independence in 1991, both nations have pursued multi-level transformations accelerated by peoples’ revolutions (Börzel, 2015; Delcour & Wolczuk, 2015). The resulting experience has included economic distress, loss of security and social benefits (Haerpfer & Kizilova, 2014; Roztocki & Weistroffer, 2015), and socio-psychological “fallout,” such as loss of trust and dissatisfaction with life (Sapsford, Abbott, Haerpfer, & Wallace, 2015). Within this challenging context, Ukraine and Georgia have both taken significant strides towards transforming higher education, joining the Bologna Process in 2005 to realign their Soviet-era institutions with the goals of the European Higher Education Area (Powell, Kuzmina, Yamchynska, Shestopalyuk, & Kuzmin, 2015). These efforts have produced positive results despite some bureaucratic resistance (Raver, 2007) and ongoing practices of corruption (Habibov, 2016).

Importantly, prospects for digital learning are well supported by developing national ICT infrastructures (Ianishevska, 2017), with both Ukraine and Georgia achieving a top-60 ranking in the information and communication category of the 2017 Social Progress Index (Social Progress Imperative, 2017a, 2017b; Stern, Wares, & Epner, 2017). Moreover, government support for distance learning is increasing (Powell et al., 2015), MOOC providers are making inroads into formal education (Ed-Era, 2017; Prometheus, 2017), and online-learning pilot projects are appearing in the English-language literature (Gravel & Dubko, 2013; Mykhailenko, Blayone, & vanOostveen, 2016; Powell, Kuzmina, Kuzmin, Yamchynska, & Shestopalyuk, 2014). Despite these positive developments, however, financial resources remain limited, and signs of low digital readiness persist among students, teachers and administrators (Blayone et al., 2017; Synytsya & Manako, 2010).

Conceptual framework

Online learning in higher education

Online learning, like distance learning (Anderson & Dron, 2010), blended learning (Halverson, Graham, Spring, Drysdale, & Henrie, 2014; Palalas, Berezin, Gunawardena, & Kramer, 2015) and mobile learning (Alhassan, 2016; Crompton, Burke, Gregory, & Gräbe, 2016), is a form of digital learning (Siemens, Gašević, & Dawson, 2015): a melding of learning activities, digital devices and global networks to achieve educational objectives. The practices of online learning are diverse, incorporating many technologies, pedagogies and guiding values (Aparicio, Bacao, & Oliveira, 2016). Some forms, such as MOOCs (Massive Open Online Courses), focus on making premium educational content globally accessible (De Corte, Engwall, & Teichler, 2016). Others seek to implement scalable learning-management systems that maximize individual flexibility while supporting optional forms of cooperation (Dalsgaard & Paulsen, 2009; Paulsen, 2003, 2008). Still others, such as those developed within the transactional tradition (Garrison & Archer, 2000), emphasize collaborative learning, targeting both the social and cognitive development of participants (Blayone et al., 2017; Garrison, 2017; Swan, 2010; vanOostveen, DiGiuseppe, Barber, Blayone, & Childs, 2016). By integrating the individual and social dimensions of learning, and by foregrounding active participation, open expression, democratic deliberation and collective inquiry, this orientation appears especially well aligned with the goal of modelling participatory democratic functioning. However, to realize meaningful results from any implementation of digitally mediated learning, the host environment, digital infrastructure and human participants must all achieve a degree of readiness.

Readiness for online learning

Readiness for online learning is an international research domain that conceptualizes and measures various success factors and enabling conditions. There are numerous readiness models (Alaaraj & Ibrahim, 2014; Darab & Montazer, 2011), instruments (Dray, Lowenthal, Miszkiewicz, Ruiz-Primo, & Marczynski, 2011; Hung, 2016; Hung, Chou, & Chen, 2010; Lin, Lin, Yeh, Wang, & Jansen, 2015), and empirical studies set in a variety of national contexts (Aldhafeeri & Khan, 2016; Chipembele, Chipembele, Bwalya, & Bwalya, 2016; Gay, 2016; Parkes, Stein, & Reading, 2015; van Rooij & Zirkle, 2016). Researchers generally adopt either a macro-level perspective, addressing the readiness of organizations, regions and countries (Beetham & Sharpe, 2007; Bui, Sankaran, & Sebastian, 2003), or a micro-level perspective, focused primarily on students (Dray et al., 2011; Parkes et al., 2015) or teachers (Gay, 2016; Hung, 2016). At the micro level, digital competencies, defined as the knowledge, skills and attitudes supporting purposeful and effective use of technology (Ala-Mutka, 2011), figure as the most prominent set of readiness factors within frameworks (Al-Araibi, Mahrin, & Mohd, 2016; Demir & Yurdugül, 2015) and instruments (Dray et al., 2011; Hung et al., 2010; Lin et al., 2015; Parasuraman, 2000; Pillay, Irving, & Tones, 2007; Watkins, Leigh, & Triner, 2004). However, existing operationalizations tend to be unidimensional and inconsistent, showing little awareness of current, multidimensional digital-competency frameworks (Blayone et al., 2018). To address these shortcomings, researchers at the EILAB, University of Ontario Institute of Technology, Canada, are leveraging the General Technology Competency and Use (GTCU) framework (Desjardins, 2005; Desjardins, Lacasse, & Belair, 2001) and the accompanying Digital Competency Profiler (DCP) to measure digital readiness for online learning (EILAB, 2017).

A digital readiness framework and profiler

As shown in Fig. 1, the GTCU is a multi-contextual (i.e., applicable to education, work, home, etc.) and multi-dimensional framework for conceptualizing digital-technology uses and related competencies. In short, Desjardins identified four human-computer-object interaction types: computational, informational, communicational and technical. The first three were derived directly from the core capabilities of computer hardware, namely processing, storing and transmitting data (IEEE, 1990). To account for operational skills, and for those instances when individuals focus on the technology itself (e.g., when a device fails), a technical order of interaction was also introduced. Unlike frameworks built on complex competence descriptions (Ferrari, 2013; Vuorikari, Punie, Gomez, & Van Den Brande, 2016), the GTCU conceptualizes effective use by matching interaction types to corresponding sets of knowledge and skills, typically developed through frequent and confident computer-mediated activity.

Fig. 1 Conceptual overview of the GTCU framework, authored by Desjardins (2005)

For the purpose of assessing digital readiness for online learning, the GTCU framework offers five key features. First, by using the core capabilities of computer hardware to conceptualize digital uses and competencies, the GTCU insulates itself from the changing designs of hardware and software platforms, and from environmental factors affecting technology use in particular contexts. Second, three of its four dimensions (technical, informational and social) represent a common core among major frameworks (Iordache, Mariën, & Baelden, 2017), and its computational dimension addresses competencies that are achieving prominence in the educational literature (Bocconi, Chioccariello, Dettori, Ferrari, Engelhardt, et al., 2016; Jun, Han, Kim, & Lee, 2014). Third, the GTCU’s online data-collection application, the DCP, has been used repeatedly to profile the technology uses of both students and professors in higher education (Barber, DiGiuseppe, vanOostveen, Blayone, & Koroluk, 2016; Desjardins & vanOostveen, 2015; Desjardins, vanOostveen, Bullock, DiGiuseppe, & Robertson, 2010). Fourth, by incorporating behavioural and attitudinal indicators, and by associating items with specific device types, the DCP provides an exceptionally rich set of data points unmatched by other readiness instruments. Finally, owing to growing international adoption, the DCP has been translated into several languages and has been used previously in non-Western contexts (Blayone et al., 2017).

Research question

The following research question guided the methodology and analysis: across the four foundational orders of technology use, what is the state of digital readiness of the Georgian and Ukrainian student cohorts for online learning?

Method

Having obtained approval from the Academic Research Councils of the participating universities and UOIT’s Research Ethics Board, participants were recruited on a volunteer basis from the student population by local officials at Batumi State Maritime Academy (BSMA), Georgia, and Ivan Franko National University (IFNU) of Lviv, Ukraine. Data were collected using the online DCP application during the period of May–July 2017.

Instrument: Digital competency profiler

As shown in Fig. 2, the DCP facilitates data collection, profile visualization and the extraction of raw data. For this study, the DCP data set consisted of: (a) socio-demographic and device-usage items, and (b) 26 indicator groups—five for technical, and seven for each of the communicational, informational and computational dimensions of use. Each group included six action-device items (a single action coupled with different device types), following a common structure: “To perform a software-level action, I use a specific hardware device type.” (A full list of actions is provided in the Appendix.) The DCP includes six device types: computer/laptops (as a single type), smartphones, tablets, gaming systems, computer appliances, and wearable devices.
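For readers who work with DCP exports, the instrument’s structure can be made concrete with a short sketch. The type and constant names below are illustrative only (the DCP’s internal data model is not published in this paper); the closing assertions simply verify the arithmetic behind the item counts.

```python
from dataclasses import dataclass

# Illustrative constants; the names are ours, not the DCP's internal model.
DIMENSIONS = ("technical", "communicational", "informational", "computational")
DEVICES = ("desktop_laptop", "smartphone", "tablet",
           "gaming_system", "computer_appliance", "wearable")

@dataclass
class ActionDeviceItem:
    dimension: str   # one of DIMENSIONS
    action_id: str   # e.g., "T1" (technical) or "S10" (communicational)
    device: str      # one of DEVICES
    frequency: int   # 1 (never) .. 5 (daily)
    confidence: int  # 1 (do not know how to use) .. 5 (can teach others)

# 5 technical + 3 dimensions x 7 groups = 26 indicator groups;
# 26 groups x 6 device types = 156 items, each with 2 measures.
assert 5 + 3 * 7 == 26
assert 26 * 6 * 2 == 312
```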

Fig. 2
figure 2

Digital Competency Profiler, action-device groups and visualizations

The DCP attaches two measures to each action-device item, using 5-point Likert scales. The frequency with which an individual performs a device-specific action is measured as: (1) never, (2) a few times a year, (3) a few times a month, (4) a few times a week, and (5) daily. Frequency of action is an important indicator of digital competency because transferable procedural knowledge is reinforced through repeated activity. The confidence with which an individual performs a device-specific action is measured as: (1) do not know how to use, (2) not confident, require assistance, (3) confident, can solve some problems, (4) fairly confident, can use with no assistance, and (5) very confident, can teach others. Device-action confidence addresses an individual’s motivation to explore novel situations and problems (Bandura, 1993) with a particular tool. These twin indicators of competency replaced direct claims (“I am able to do x”) in the instrument’s early development. It is expected that individuals are able to differentiate, and reliably report, the frequencies with which they perform certain actions and their relative levels of comfort performing an action with a particular type of device (Desjardins et al., 2010; DiGiuseppe, Partosoedarso, vanOostveen, & Desjardins, 2013). All told, the 26 action-device groups, each containing six device-specific items with two measures apiece, provide researchers with 312 data points per participant (26 × 6 × 2 = 312).

Validity and reliability

The original DCP survey instrument underwent content validation through the participation of 10 Canadian teachers and parents (Desjardins et al., 2001). Subsequently, six experts joined Desjardins et al. (2001) in a process of construct validation, which included statistical investigation of correlation matrices. All retained items related well to their conceptualized dimension (Desjardins et al., 2001). The current DCP application houses an expanding database populated through ongoing data collection. The aggregate data set has been checked for reliability, with Cronbach’s alpha values ranging from .76 to .94 on the sub-scales. The alpha values for the subset collected for this study ranged from .78 to .88 for the computer/laptop composite (frequency and confidence) scores, and from .80 to .91 for the mobile composite scores.
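The paper does not name the software used for these reliability checks. For those wishing to repeat them on their own DCP data, the following is a minimal, conventional implementation of Cronbach’s alpha, not the authors’ actual procedure.

```python
import numpy as np

def cronbach_alpha(item_scores) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_var_sum = x.var(axis=0, ddof=1).sum()   # per-item variances, summed
    total_var = x.sum(axis=1).var(ddof=1)        # variance of respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

Values in the ranges reported above (.76 to .94) would conventionally indicate acceptable-to-strong internal consistency for the sub-scales.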

Although the DCP consists of four sub-scales, with items mapped to a foundational order of human-computer interaction (described above), actual digitally mediated activity most often possesses characteristics of more than one order (Desjardins, 2005). Consequently, ongoing validation of the DCP has not focused on statistical procedures such as factor analysis (F. Desjardins, personal communication, April 17, 2018). Rather, validation is being pursued by assessing the usefulness and predictive value of DCP data in specific contexts of application. Recently, this author found strong positive correlations between learners’ reported DCP competencies and their performance levels when conducting authentic online-learning activities (Blayone et al., 2017).

Localization

Several localizations of the DCP application have been implemented over time. For this study, a Ukrainian localization was prepared, reviewed and tested by two trilingual (Russian, Ukrainian, English) researchers familiar with the field of digital-learning research, in consultation with a native Ukrainian-speaking and a native English-speaking researcher (Blayone et al., 2017). Owing to time constraints, the English version was used in Georgia. As part of the recruitment process, Georgian participants were advised that participation would require reading skills in English. Although this requirement reduced the participant pool, it also generated enthusiasm by highlighting the international scope of the research.

Sample

Students were recruited from the Faculty of Business Management at BSMA and the Department of Management Economics at IFNU. As shown in Table 1, 150 students (24% of the participating faculty’s student body) at BSMA and 129 students (38% of the participating department’s student body) at IFNU volunteered to complete an online profile. Both undergraduates and graduates participated, primarily between the ages of 17 and 24. More graduates than undergraduates participated in Georgia, which is reflected in the age groupings. In Georgia, 79% of participants were female; in Ukraine, 69%. This aligns with a reported demographic trend in Ukrainian higher education, where students in the social sciences, business and law are over 60% female (Kogyt, 2016).

Table 1 Socio-demographic characteristics of participants

Analysis

This study adopted a three-step analytical procedure (Fig. 3) derived from recent observational research (Blayone et al., 2017). As a first step, the full DCP data set was reduced to the indicators most relevant for assessing online-learning readiness. The device-ownership data indicated that 57% of Georgian and 75% of Ukrainian participants owned a laptop or desktop. Similarly, 58% of Georgian and 78% of Ukrainian participants owned a smartphone. Only 29% of Georgians and 20% of Ukrainians owned a tablet. Because laptops/desktops and smartphones are the most relevant devices for online learning, action indicators tied to other devices were set aside. The 26 desktop/laptop items were therefore selected to produce one set of competency scores, and a second set of 26 mobile scores was constructed primarily from smartphone items. In eight cases (four in Georgia and four in Ukraine) in which a participant’s tablet values exceeded their smartphone values, tablet data were substituted.
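The logic of the mobile-score construction can be sketched as follows. The paper describes the tablet substitution at the participant level; for simplicity, this sketch applies the comparison item by item, and the key format ("action_device_measure") is hypothetical, since the DCP’s export layout is not documented here.

```python
def mobile_value(responses: dict, action: str, measure: str) -> int:
    """Return the mobile value for one action and one measure, preferring
    smartphone data and substituting tablet data where it is higher
    (eight such participants appeared in this data set)."""
    phone = responses.get(f"{action}_smartphone_{measure}", 0)
    tablet = responses.get(f"{action}_tablet_{measure}", 0)
    return max(phone, tablet)
```

Desktop/laptop values pass through unchanged, since computers and laptops are treated as a single device type.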

Fig. 3 DCP data-analysis methodology

As a second step, the 5-point frequency and confidence measures were summed to create an item-competency score for each device-action item, with 10 as the maximum value (indicating daily use and high confidence). The rationale for summing the two measures is rooted in the operational logic of the GTCU framework: the frequency with which an individual performs an activity and the related level of confidence are mutually reinforcing indicators of digital competence (Blayone et al., 2017). This step produced 26 desktop/laptop and 26 mobile scores for each participant.
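In code, this second step reduces to one addition per action-device item; the sketch below simply pins down the resulting score range (2 to 10, since each measure runs from 1 to 5).

```python
def item_competency(frequency: int, confidence: int) -> int:
    """Sum the twin 5-point measures into an item-competency score.
    Minimum 2 (never uses, cannot use); maximum 10 (daily use, can teach)."""
    if not (1 <= frequency <= 5 and 1 <= confidence <= 5):
        raise ValueError("frequency and confidence are 5-point measures (1..5)")
    return frequency + confidence
```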

The third step built directly on the demonstrated strength of the DCP: it predicts performance most reliably when self-reported competency scores are high or low (Blayone et al., 2017). Adapting this finding, participants were assigned to one of three segments for each action-device item. Participants with scores greater than 6 (of 10) were placed in a high-readiness segment, whose members would be expected to perform aligned tasks with a good degree of effectiveness. Those with scores less than 4 were placed in a low-readiness segment, whose members would be expected to struggle and to require formal support. The middle segment comprised those individuals for whom the DCP predicts performance less reliably (Blayone et al., 2017); therefore, although their performance may prove adequate, no inferences regarding their expected functioning are drawn. Importantly, these observationally informed thresholds are consistent with the logic of the twin 5-point measures (presented in the Appendix).
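The resulting segmentation rule is small enough to state directly in code; the thresholds below are exactly those described above.

```python
def readiness_segment(score: int) -> str:
    """Map an item-competency score (2..10) to a readiness segment."""
    if score > 6:    # 7..10: expected to perform aligned tasks effectively
        return "high"
    if score < 4:    # 2..3: expected to struggle without formal support
        return "low"
    return "middle"  # 4..6: the DCP predicts performance less reliably here
```

For example, a participant reporting weekly use (4) and fair confidence (4) on an item scores 8 and falls in the high-readiness segment.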

Findings

Findings are organized by the four GTCU dimensions of use. For each dimension, the constituent action-device items are first defended as relevant to online learning, and data for each item are then presented in a single tabular format for the BSMA (Georgia) and IFNU (Ukraine) cohorts. The size of each readiness segment is given as a percentage of the host cohort. The high-readiness and low-readiness percentages are bolded in the tables because these are the values from which we draw inferences regarding expected performance. The middle-segment values receive little comment because the DCP predicts performance less reliably within this range (Blayone et al., 2017).

Based on our research and praxis, a guideline for interpreting findings at the group level may be suggested: one might expect good levels of group communication and collaborative-research performance in an online-learning environment when a majority of students in a cohort are positioned in the high-readiness segment, and a small minority (e.g., less than 20%) in the low-readiness segment, for actions aligned with course activities. Where a high percentage of students are positioned in a low-readiness segment, the need for substantial support should be expected. For each dimension, the analytical summaries that follow highlight: (a) the relative strength of desktop/laptop versus mobile usage, (b) the relative sizes of the high- and low-readiness segments within a cohort, and (c) selected patterns of general difference between cohorts, providing a comparative lens for contextualizing the results. The overall aim is to present data in an accessible format and to encourage participating institutions to draw further inferences in relation to their own learning goals, activity types and selected technologies.
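A minimal sketch of this group-level guideline follows. The majority and 20% cut-offs come from the text above; the threshold used to flag a need for substantial support is our own illustrative choice, since the paper states that condition only qualitatively.

```python
def cohort_reading(high_pct: float, low_pct: float) -> str:
    """Interpret one action-device item for a whole cohort, given the
    percentages of students in the high- and low-readiness segments."""
    if high_pct > 50 and low_pct < 20:
        return "good group performance expected"
    if low_pct >= 50:  # illustrative cut-off for "a high percentage"
        return "substantial support needed"
    return "mixed readiness; plan targeted support"
```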

Digital readiness for technical actions

Technical actions include a foundational academic activity (T1: creating/editing a document) and four other items related to successful functioning in online-learning environments. Operational abilities, included in this dimension, are prerequisite to effective functioning in other GTCU dimensions, and can often be acquired quickly when one has sufficient technology access and motivational resources. As shown in Table 2, for all the technical actions (with the exception of creating/editing documents among the IFNU cohort), there are generally more members in the high-readiness segment using mobile devices than using desktop/laptops in both cohorts. This finding highlights the relative strength of mobile-device usage.

Table 2 Digital readiness segments for technical actions

Within the BSMA cohort, 40–63% of students appear in the low-readiness segment across all action-device items. This includes 50% in the low-readiness segment for creating/editing documents (T1) with a desktop/laptop, an essential academic procedure, and 40% when using a mobile device. The high-readiness segment includes 27–33% of the cohort on items T1–T4 using a mobile device. Within the IFNU cohort, large high-readiness segments are found for creating/editing documents (T1: 46% with desktop/laptop, and 43% with mobile) and managing accounts (T4: 47% with mobile, and 33% with desktop/laptop). Looking across cohorts, despite positioning a consistently high number of students in low-readiness segments, BSMA achieves greater numbers in the high-readiness segment than IFNU on three of five items (T2, T3 and T5). IFNU, however, achieves the highest readiness numbers in this dimension for creating/editing documents (T1: 46% with a desktop/laptop).

Digital readiness for communicational actions

In online-learning contexts, communicational actions support sharing ideas, building trusting relationships, exploring perspectives, and collaborating towards common objectives. Many of the DCP communicational actions that once defined specific genres of software (e.g., S6, S7, S8, S11 and S12) now appear within multi-purpose applications. Hybrid collaboration platforms such as Slack, for example, support communication, file-sharing and content publishing. Similarly, social-network environments (S10) continue to gain momentum as multi-purpose platforms in educational contexts (Correa, 2015; Dickie & Meier, 2015; Ellefsen, 2015; Halpern & Gibbs, 2013; Kosinski, Matz, Gosling, Popov, & Stillwell, 2015). Facebook is noteworthy owing not only to its diverse functionality, but also because it is the most popular social platform, with over two billion monthly active users (Statista, 2017). Taken together, the communicational actions defined by the DCP, and the accompanying competencies related to socio-emotional and cultural intelligence, privacy and security, and identity representation, are critical for effective participation in increasingly global online-learning environments. As shown in Table 3, there are again generally more members of both cohorts in the top readiness segment using mobile devices.

Table 3 Digital readiness segments for communicational actions

Within the BSMA cohort, there are consistently large percentages of students (52–65%) in the low-readiness segment across all seven communicational actions with a desktop/laptop. (With mobile devices, the range improves to 42–51%.) Within the IFNU cohort, three mobile-action items (S6: text messaging; S7: audio messaging; and S10: using social networks) place over 60% of students in the high-readiness segment. For social-network usage alone, 69% appear in the high-readiness segment for desktop/laptops, and 78% for mobile. These findings suggest a strong foundation for ongoing digital-competency development (Correa, 2015), and they highlight the communicational strengths of Ukrainian students noted in a previous study (Blayone et al., 2017). However, the IFNU cohort has at least 70% in the low-readiness segments for sharing one’s works and ideas online (S12), an important item focused on self-expression. This finding also aligns with results from the previous study (Blayone et al., 2017). Within the BSMA cohort, the consistently large percentages (42–65%) in low-readiness segments across the entire range of communicational items present a development opportunity, especially given that 25–32% of students show high readiness for five items (S6 to S10).

In both cohorts, using collaboration tools (S11) produced large low-readiness segments (BSMA: 62% using desktop/laptop, and 53% mobile; IFNU: 40% using desktop/laptop, and 55% mobile). This finding is coupled with even larger low-readiness segments for sharing one’s works or ideas online (S12) (BSMA: 65% using desktop/laptop, and 51% mobile; IFNU: 70% using desktop/laptop, and 71% mobile), suggesting that frequent use of social networks, which offer affordances for collaboration and content publishing, has not yet been associated with these “serious” activities, or leveraged for such purposes. Finally, given the general popularity of social networking, BSMA’s low readiness for using social networks, compared to IFNU, stands out in this dimension.

Digital readiness for informational actions

Informational items target interactions between a subject and knowledge artifacts. Searching for and accessing journal articles (I14), electronic books (I18) and short videos (I15) are essential research skills. The ability to find quality films (I16) and music (I17), particularly those available for educational repurposing, is critical when building multimedia objects. Using digital maps (I13) becomes a survival skill when navigating unfamiliar places, a situation in which international students, for example, frequently find themselves. Finally, content-aggregation tools can dramatically increase the efficiency and effectiveness of online research, especially when coupled with a reference-management application (I19). As a group, therefore, these seven informational actions represent vital digital activities in higher education. As shown in Table 4, there are once again generally greater numbers of high-readiness users for mobile actions within each cohort. Only for I16 (searching for or downloading movies) do we see greater numbers of students in the top segment using desktop/laptops.

Table 4 Digital readiness segments for informational actions

Within the IFNU cohort, there are large numbers of students in the high-readiness segments for searching short videos (I15: 60% with desktop/laptop, and 61% with mobile). The IFNU cohort also shows substantial high-readiness segments for searching journal articles (I14: 33% using mobile), searching movies (I16: 36% using a desktop/laptop) and downloading music (I17: 36% using a desktop/laptop, and 44% with mobile). However, there are also large numbers in the low-readiness segment for automating information sources (I19: 81% using a desktop/laptop, and 88% with mobile). Within the BSMA cohort, the key finding is the very large low-readiness segments across all informational items, ranging from 42 to 67% for mobile use, and from 59 to 74% for desktop/laptop use.

Looking across cohorts, the large percentages of students in the low-readiness segment for searching online journal articles (BSMA: 60% desktop/laptop and 53% mobile; IFNU: 37% desktop/laptop and 40% mobile) and electronic books (BSMA: 66% desktop/laptop and 57% mobile; IFNU: 48% desktop/laptop and 41% mobile) are noteworthy. Effectively accessing articles and books is a starting point for university-level research. Overall, where the IFNU cohort shows some moderate high-readiness segments in this dimension, the BSMA cohort has significant majorities of students in the low-readiness segment for all desktop/laptop items and most mobile items.

Digital readiness for computational actions

Computational actions leverage the processing power of digital hardware and software to organize, transform and visualize numerical and non-numerical data to address complex problems. Functioning effectively in this dimension requires substantial domain knowledge and the ability to assign “cognitive processes” to the computer, either through a software application or a programming interface. This includes interacting with online calendar systems (E20); data-visualization tools, such as concept-mapping, diagramming and graphing applications (E21, E22 and E24); numerical and statistical-analysis packages (E23 and E25); and scripting/programming environments (E26). Indeed, it is difficult to imagine conducting research today without significant experience with some of these competencies, particularly in an age of “big data” (Bocconi et al., 2016).

As shown in Table 5, and consistent with other studies set in Eastern Europe (Blayone et al., 2017) and Canada (Barber et al., 2016), activities in this dimension, which are usually performed on desktops and laptops, continue to challenge students. For all seven action items in this dimension, very large percentages of students are positioned in the low-readiness segments in both cohorts (BSMA: 72–78% using desktop/laptops and 55–71% with mobile devices; IFNU: 61–87% using desktop/laptops and 59–96% with mobile devices). Looking across cohorts, BSMA places a slightly greater percentage of its students in the high-readiness segments than IFNU for all items using a desktop/laptop, and for five of seven items using a mobile device.

Table 5 Digital readiness segments for computational actions

Discussion

Collaborative forms of fully online learning appear well aligned with aspirations for educational transformation and democratic development in Ukraine and Georgia. Assessing and building the technology readiness of learners in these contexts, however, is challenging. Profiling digital competencies with the DCP, and positioning students within high-, medium- and low-readiness segments for a variety of digital interactions, can help guide faculty and administrators during the preparation and implementation phases of online programs.

Large numbers of students in low-readiness segments, like those found in this study, suggest immediate opportunities for skill-development interventions. For example, faculty might introduce greater use of digital devices and activities (e.g., web quests, blogging, social-media posting, etc.) into the current curriculum, and pursue digital “maker” activities (Blikstein, Kabayadondo, Martin, & Fields, 2017; Pangrazio, 2014). Those in the middle segment can be helped to diagnose their readiness level further by attempting a few (instructor-designed) digital-learning scenarios made available online prior to course launch (Blayone et al., 2017). Once a collaborative online course starts, students with high readiness can serve a critical community function: to model best practices and support those who are less comfortable leveraging the technology affordances.

When implementing a fully online or blended course or program, DCP findings should be used in tandem with a digital-learning model well aligned with the context and desired outcomes. Two recommended options are the Community of Inquiry (CoI) theoretical framework (Garrison, 2017; Richardson, Arbaugh, Cleveland-Innes, Ice, Swan, et al., 2012) and the Fully Online Learning Community (FOLC) model (Blayone et al., 2017; vanOostveen, 2016; vanOostveen, DiGiuseppe, Barber, & Blayone, 2016). These collaborative models emphasize: (a) active participation, freedom of expression, and critical deliberation (Garrison, 2016); (b) the empowering, connecting and cognitive-partnering qualities of digital-learning tools (Blayone et al., 2017; vanOostveen et al., 2016); (c) “deep learning” rather than rote learning, fostering reflective thinking and cognitive agility (Akyol & Garrison, 2011; Garrison, Anderson, & Archer, 2001); and (d) culture and experience as contextual foundations for building meaningful knowledge (Dewey, 1897).

With a specific model selected, digital-readiness findings can be mapped to target learning processes. For example, the CoI has theorized and validated three key dimensions of online learning: social presence, cognitive presence and teaching presence, operationalized through well-defined elements, categories and indicators (Garrison, 2017; Swan, Garrison, & Richardson, 2009). By using this CoI apparatus in tandem with DCP readiness data, one can anticipate the degree to which learning activities are aligned with the technology strengths of a cohort. For example, the strength of Ukrainian students in using social networks points toward Facebook as a potential environment for building both social presence (SP) and cognitive presence (CP). (Within the CoI, SP relates to building interpersonal trust and open expression, and CP relates to dynamics of collaborative thinking and knowledge building.) It should be noted, however, that the technology readiness of students remains a necessary but insufficient condition for building successful online-learning experiences. High-quality activity design, strong environmental supports for nurturing student motivation (Deci & Ryan, 2000; Nakamura & Csikszentmihalyi, 2002), and competent online facilitators are also vital.

Limitations

There are four limitations to note. First, the sample was recruited from the departments with which the contributing authors from BSMA and IFNU were affiliated, resulting in heavy concentrations of business majors. Moreover, data were collected via an online application, in Ukrainian at IFNU and in English at BSMA, which limited access to those without the requisite language skills and Internet connectivity. More generally, given the constraints of the international research partnerships involved, representative samples of the full student bodies at each university were not sought; therefore, the results are not readily generalizable.

Second, the examples attached to some DCP action indicators (e.g., S10 refers to Facebook, Google+, LinkedIn and Twitter as examples of social-networking systems) are biased towards Western contexts. In much of Eastern Europe, Russian networks such as ВКонтакте (VKontakte) and Одноклассники (Odnoklassniki) are popular. Importantly, in 2017, Ukraine blocked Russian social networks (Luhn, 2017), which encouraged use of Western platforms. This may partly account for the high-readiness counts among Ukrainians for using social networks, and for the differences between Ukrainian and Georgian usage. That is, the examples given may have been less familiar to Georgian students and may have influenced their responses.

Third, drawing inferences from self-reported digital competencies about expected patterns of performance is always difficult. The literature reports misalignments between perceived abilities and observed performance using other instruments (Bradlow, Hoch, & Hutchinson, 2002; Hargittai & Shafer, 2006; Litt, 2013). Some also report instrumentation issues related to conceptual ambiguity, incompleteness and over-simplification (van Deursen, Helsper, & Eynon, 2016). We acknowledge these challenges; accordingly, this study drew no performance inferences where moderate digital-competency scores were reported, a range in which the DCP predicts performance less reliably.

Finally, although human capacities to use digital tools effectively are widely considered the most significant set of micro-level readiness factors for successful online learning (Blayone et al., 2018), other micro- and macro-level factors are also important. For example, in post-Soviet contexts, corruption among institutional leaders may limit the physical and motivational resources available for digital-learning innovation (Habibov, 2016). Moreover, national and regional cultural values invariably shape student and instructor willingness to function in virtual spaces (Gunawardena, 2014; Mittelmeier, Heliot, Rienties, & Whitelock, 2015; Parrish & Linder-VanBerschot, 2010) and to engage in less-structured forms of active learning (Blayone et al., 2017).

Conclusion

Within the frame of a multifaceted, international research program addressing post-Soviet educational transformation (Blayone et al., 2017; Mykhailenko et al., 2016), this study assessed the digital readiness of students in Ukraine and Georgia for fully online collaborative learning. Although large percentages of students in both cohorts appeared ill-prepared for many types of online-learning activity, there were hopeful findings. Among students from the IFNU (Ukraine) cohort, large numbers reported high readiness for communicating via social networks and finding information via social-media sites. Within the BSMA (Georgia) cohort, greater percentages of students were found in high-readiness segments for most technical and computational actions than at IFNU. A target-learning-model approach to rendering the data actionable was proposed. In addition, the researchers suggested taking immediate action to encourage greater use of digital technologies in current classroom praxis to develop digital-learning competencies.

We believe this study makes several positive contributions. First, it extends online-learning readiness and digital-competence research to the post-Soviet sphere and introduces a readiness methodology tied to performance analysis. Second, while identifying deep pockets of low digital readiness, it presents several positive findings on which the participating Georgian and Ukrainian institutions might build. Finally, it demonstrates a multi-contextual DCP research apparatus that can be made available to other researchers and practitioners.