
The ICILS Assessment Framework defines computer and information literacy (CIL) as an “individual’s ability to use computers to investigate, create, and communicate in order to participate effectively at home, at school, in the workplace, and in the community” (Fraillon, Schulz, & Ainley, 2013, p. 18). According to the framework, CIL comprises two strands, each of which is specified in terms of a number of aspects. The strands describe CIL in terms of its two main purposes: receptive (collecting and managing information) and productive (producing and exchanging information). The aspects further articulate CIL in terms of the main processes applied within each strand. These are knowing about and understanding computer use, accessing and evaluating information, managing information, transforming information, creating information, sharing information, and using information safely and securely.

In this chapter, we detail the measurement of CIL in ICILS and discuss student achievement across ICILS countries. We begin the chapter by describing the CIL assessment instrument and the proficiency scale derived from the ICILS test instrument and data. We also describe and discuss the international student test results relating to computer and information literacy.

The content of this chapter relates to ICILS Research Question 1, which focuses on the extent of variation existing among and within countries with respect to student computer and information literacy.

Assessing CIL

Because ICILS is the first international comparative research study to focus on students’ acquisition of computer and information literacy, the ICILS assessment instrument is also unique in the field of crossnational assessment. The instrument’s design built on existing work in the assessment of digital literacy (Binkley et al., 2012; Dede, 2009) and ICT literacy (Australian Curriculum, Assessment and Reporting Authority, 2012). It also included the following essential features of assessment in this domain:

  • Students completing tasks solely on computer;

  • The tasks having a real-world crosscurricular focus;

  • The tasks combining technical, receptive, productive, and evaluative skills; and

  • The tasks referencing safe and ethical use of computer-based information.

In order to ensure standardization of students’ test experience and comparability of the resultant data, the ICILS instrument operates in a “walled garden,” which means students can explore and create in an authentic environment without the comparability of student data being potentially contaminated by differential exposure to digital resources and information from outside the test environment.

The assessment instrument was developed over a year in consultation with the ICILS national research coordinators (NRCs) and other experts in the field of digital literacy and assessment. Questions and tasks were first created as storyboards, before being authored into the computer-based delivery system. The results of the ICILS field trial, conducted in 2012, were used to inform the content of and refine the final assessment instrument. The ICILS technical report (Fraillon, Schulz, Friedman, Ainley, & Gebhardt, forthcoming) provides more information about the development of the ICILS assessment instrument.

The questions and tasks making up the ICILS test instrument were presented in four modules, each of which took 30 minutes to complete. Each student completed two modules randomly allocated from the set of four. Full details of the ICILS assessment design, including the module rotation sequence and the computer-based test interface, can be found in the ICILS Assessment Framework (Fraillon et al., 2013, pp. 36–42).

More specifically, a module is a set of questions and tasks based on an authentic theme and following a linear narrative structure. Each module has a series of smaller discrete tasks,Footnote 1 each of which typically takes less than a minute to complete, followed by a large task that typically takes 15 to 20 minutes to complete. The narrative of each module positions the smaller discrete tasks as a mix of skill execution and information management tasks that students need to do in preparation to complete the large task.

When beginning each module, the ICILS students were presented with an overview of the theme and purpose of the tasks in the module as well as a basic description of what the large task would comprise. Students were required to complete the tasks in the allocated sequence and could not return to review completed tasks. Table 3.1 includes a summary of the four ICILS assessment modules and large tasks.

Table 3.1 Summary of ICILS test modules and large tasks

Data collected from the four test modules shown in Table 3.1 were used to measure and describe CIL in this report. In total, the data comprised 81 score points derived from 62 discrete questions and tasks. Just over half of the score points were derived from criteria associated with the four large tasks. Students’ responses to these tasks were scored in each country by trained expert scorers. Data were only included where they met or exceeded the IEA technical requirements. The ICILS technical report (Fraillon et al., forthcoming) provides further information on adjudication of the test data.

As noted previously, the ICILS assessment framework has two strands, each specified in terms of several aspects. The strands describe CIL in terms of its two main purposes (receptive and productive), while the aspects further articulate CIL in terms of the main (but not exclusive) constituent processes used to address these purposes. We used this structure primarily as an organizational tool to ensure that the full breadth of the CIL construct was included in its description and would thereby make the nature of the construct clear.

The following bulleted list sets out the two strands and corresponding aspects of the CIL framework. Also included are the respective percentages of score points attributed to each strand in total and to each aspect within the strands.

  • Strand 1, Collecting and managing information, comprising three aspects, 33 percent:

    • Aspect 1.1: Knowing about and understanding computer use, 13 percent;

    • Aspect 1.2: Accessing and evaluating information, 15 percent;

    • Aspect 1.3: Managing information, 5 percent.

  • Strand 2, Producing and exchanging information, comprising four aspects, 67 percent:

    • Aspect 2.1: Transforming information, 17 percent;

    • Aspect 2.2: Creating information, 37 percent;

    • Aspect 2.3: Sharing information, 1 percent;

    • Aspect 2.4: Using information safely and securely, 12 percent.

As stated in the ICILS Assessment Framework, “… the test design of ICILS was not planned to assess equal proportions of all aspects of the CIL construct, but rather to ensure some coverage of all aspects as part of an authentic set of assessment activities in context” (Fraillon et al., 2013, p. 43). Approximately twice as many score points relate to Strand 2 as to Strand 1, proportions that correspond to the amount of time the ICILS students were expected to spend on each strand’s complement of tasks. The first three aspects of Strand 2 were assessed primarily via the large tasks at the end of each module, with students expected to spend roughly two thirds of their working time on these tasks.

Each test completed by a student consisted of two of the four modules. Altogether, there were 12 different possible combinations of module pairs. Each module appeared in six of the combinations—three times as the first and three times as the second module when paired with each of the other three. The module combinations were randomly allocated to students. This test design made it possible to assess a larger amount of content than could be completed by any individual student and was necessary to ensure a broad coverage of the content of the ICILS assessment framework. This design also controlled for the influence of item position on difficulty across the sampled students and provided a variety of contexts for the assessment of CIL.
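
As an illustration of this balanced rotation, the sketch below (hypothetical code, not the operational ICILS test-assembly system) enumerates the 12 ordered module pairs and checks the balance property described above; the module labels are generic placeholders.

    import itertools
    import random

    # Placeholder identifiers for the four ICILS test modules
    # (After-School Exercise is one of them; labels here are generic).
    MODULES = ["M1", "M2", "M3", "M4"]

    # All ordered pairs of distinct modules: 4 x 3 = 12 combinations.
    ROTATIONS = list(itertools.permutations(MODULES, 2))
    assert len(ROTATIONS) == 12

    # Balance check: each module appears in 6 combinations,
    # 3 times in first position and 3 times in second position.
    for m in MODULES:
        assert sum(first == m for first, _ in ROTATIONS) == 3
        assert sum(second == m for _, second in ROTATIONS) == 3

    def allocate(student_ids, seed=1):
        """Randomly allocate one of the 12 module pairs to each student."""
        rng = random.Random(seed)
        return {sid: rng.choice(ROTATIONS) for sid in student_ids}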

We used the Rasch IRT (item response theory) model (Rasch, 1960) to derive the cognitive scale from the data collected from the 62 test questions and tasks. In this report, the term item refers to a unit of analysis based on scores associated with student responses to a question or task. Most questions and tasks each corresponded to one item. However, each ICILS large task was scored against a set of criteria (each criterion with its own unique set of scores) relating to the properties of the task. Each large task assessment criterion is therefore also an item in ICILS.
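
For reference, the dichotomous Rasch model expresses the probability that student i answers item j correctly as a function of the difference between the student's ability θ_i and the item's difficulty b_j (the polytomously scored large-task criteria require an extension of this model, such as the partial credit model):

    P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}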

We set the final reporting scale to a metric that had a mean of 500 (the ICILS average score) and a standard deviation of 100 for the equally weighted national samples. We used plausible value methodology with full conditioning to derive summary student achievement statistics. This approach enables estimation of the uncertainty inherent in a measurement process (see, in this regard, von Davier, Gonzalez, & Mislevy, 2009). The ICILS technical report provides details on the procedures the study used to scale test items (Fraillon et al., forthcoming).
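
A minimal sketch of the kind of linear transformation involved is given below, assuming plausible values on the original logit metric and weights that give each national sample equal influence; in practice the transformation constants were fixed during the international scaling rather than recomputed from any one data file, and the function and variable names here are illustrative only.

    import numpy as np

    def to_icils_metric(pv_logits, senate_weights):
        """Rescale plausible values (logit metric) to the ICILS reporting
        metric, which has mean 500 and standard deviation 100 for the
        equally weighted pooled national samples."""
        pv = np.asarray(pv_logits, dtype=float)
        w = np.asarray(senate_weights, dtype=float)
        mean = np.average(pv, weights=w)
        sd = np.sqrt(np.average((pv - mean) ** 2, weights=w))
        return 500.0 + 100.0 * (pv - mean) / sd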

The CIL described achievement scale

The ICILS described scale of CIL achievement is based on the content and scaled difficulties of the assessment items. As part of the test development process, the ICILS research team wrote descriptors for each item in the assessment instrument. These item descriptors, which also reference the ICILS assessment framework, describe the CIL knowledge, skills, and understandings demonstrated by a student correctly responding to each item.

Pairing the scaled difficulty of each item with the item descriptors made it possible to order the items from least to most difficult, a process that produces an item map. Analysis of the item map and student achievement data were then used to establish proficiency levels that had a width of 85 scale points and level boundaries at 407, 492, 576, and 661 scale points.Footnote 2 Student scores below 407 scale points indicate CIL proficiency below the lowest level targeted by the assessment instrument.

The described CIL scale was developed on the basis of a transformation of the original item calibration so that the relative positions of students’ scaled scores and the item difficulties would represent a response probability of 0.62. Thus, a student with ability equal to that of the difficulty of a given item on the scale would have a 62 percent chance of answering that item correctly.
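
The 0.62 response probability corresponds to a fixed shift on the logit scale: a student whose ability lies ln(0.62/0.38) ≈ 0.49 logits above an item's Rasch difficulty has a 62 percent chance of success, so describing items at this response probability amounts to reporting each calibrated difficulty offset by that constant (a standard adjustment; the exact ICILS transformation is documented in the technical report).

    \theta - b = \ln\!\left(\frac{0.62}{1 - 0.62}\right) \approx 0.49,
    \qquad
    P(X = 1 \mid \theta, b) = \frac{\exp(0.49)}{1 + \exp(0.49)} \approx 0.62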

The width of the levels was 85 scale points. We can assume that students achieving a score corresponding to the lower boundary of a level correctly answered about 50 percent of items in that level. We can also expect that students with scores within a level (above the lower boundary) correctly answered more than 50 percent of the items in that level. Thus, once we know where a student’s proficiency score is located within a given level, we can expect that he or she will have correctly answered at least half of the questions for that level, regardless of the location of his or her score within the level.

The scale description comprises syntheses of the common elements of CIL knowledge, skills, and understanding at each proficiency level. It also describes the typical ways in which students working at a level demonstrate their proficiency. Each level of the scale references the characteristics of students’ use of computers to access and use information and to communicate with others. The scale thus reflects a broad range of development, extending from students’ application of software commands under direction, through their increasing independence in selecting and using information to communicate with others, and on to their ability to independently and purposefully select information and use a range of software resources in a controlled manner in order to communicate with others. Included in this development is students’ knowledge and understanding of issues relating to online safety and ethical use of electronic information. This understanding encompasses knowledge of information types and security procedures through to demonstrable awareness of the social, ethical, and legal consequences of a broad range of known and unknown users (potentially) accessing electronic information.

In summary, the developmental sequence that the CIL scale describes has the following underpinnings: knowledge and understanding of the conventions of electronic information sources and software applications, ability to critically reason out and determine the veracity and usefulness of information from a variety of sources, and the planning and evaluation skills needed to create and refine information products for specified communicative purposes.

The scale is hierarchical in the sense that CIL proficiency becomes more sophisticated as student achievement progresses up the scale. We can therefore assume that a student located at a particular place on the scale because of his or her achievement score will be able to undertake and successfully accomplish tasks up to that level of achievement.

Before constructing the scale, we examined the achievement data in order to determine if the test was measuring more than one aspect of CIL in discernibly different and conceptually coherent ways. Given the distinction in the ICILS assessment framework between Strands 1 and 2, we investigated whether the data were indeed describing and reporting these separately.

We found a latent correlation between student achievement on the two strands of 0.96. We also found that the mean achievement of students across countries varied little when we analyzed the data from Strands 1 and 2 separately. As a consequence, and in the absence of any other dimensionality evident in the data,Footnote 3 we concluded that CIL could be reported in a single achievement scale. Although the ICILS assessment framework leaves open the possibility that CIL may comprise more than one measurement dimension, it does “not presuppose an analytic structure with more than one subscale of CIL achievement” (Fraillon et al., 2013, p. 19).

Table 3.2 shows the described CIL scale. The table includes descriptions of the scale’s contents and the nature of the progression across the proficiency levels from 1 to 4. A small number of test items had scaled difficulties below Level 1 of the scale. These items represented execution of the most basic skills (such as clicking on a hyperlink) and therefore did not provide sufficient information to warrant description on the scale.

Table 3.2 CIL described achievement scale

Students working at Level 1 demonstrate familiarity with the basic range of software commands that enable them to access files and complete routine text and layout editing under instruction. They recognize not only some basic conventions used by electronic communications software but also the potential for misuse of computers by unauthorized users.

A key factor differentiating Level 1 achievement from achievement below Level 1 is the range of software commands students can use. Students working below Level 1 are unlikely to be able to create digital information products unless they have support and guidance. Key factors differentiating Level 1 achievement from achievement at the higher levels are the breadth of students’ familiarity with conventional software commands, the degree to which they can search for and locate information, and their capacity to plan how they will use information when creating information products.

Students working at Level 2 can demonstrate basic use of computers as information resources. They are able to locate explicit information in simple digital resources, select and add content to information products, and exercise some control over laying out and formatting text and images in information products. They demonstrate awareness of the need to protect access to some electronic information and of possible consequences of unwanted access to information. A key factor differentiating Level 2 achievement from achievement at the higher levels is the extent to which students can work autonomously and with a critical perspective when accessing information and using it to create information products.

Students working at Level 3 possess sufficient knowledge, skills, and understanding to independently search for and locate information. They also have ability to edit and create information products. They can select relevant information from within electronic resources, and the information products they create exhibit their capacity to control layout and design. Students furthermore demonstrate awareness that the information they access may be biased, inaccurate, or unreliable. The key factors differentiating achievement at Level 3 from Level 4 are the degree of precision with which students search for and locate information and the level of control they demonstrate when using layout and formatting features to support the communicative purpose of information products.

Students working at Level 4 execute control and evaluative judgment when searching for information and creating information products. They also demonstrate awareness of audience and purpose when searching for information, selecting information to include in information products, and formatting and laying out the information products they create. Level 4 students additionally demonstrate awareness of the potential for information to be a commercial and malleable commodity. They furthermore have some appreciation of issues relating to using electronically-sourced, third-party intellectual property.

Example ICILS test items

To provide a clearer understanding of the nature of the scale items, we include in this section of the chapter a set of example items. These indicate the types and range of tasks that students were required to complete during the ICILS test. The tasks also provide examples of responses corresponding to the different proficiency levels of the CIL scale. The data for each example item included in the analysis (including calculation of the ICILS average) are drawn only from those countries that met the sample participation, test administration, and coding requirements for that item.

The example items all come from a module called After-School Exercise. This module required students to work on a sequence of discrete tasks associated with planning an after-school exercise program. The students were then asked to create a poster advertising the program. The five discrete tasks immediately below serve as examples of achievement at different levels of the CIL scale. They are followed by a description of the After-School Exercise large task and a discussion of the scoring criteria for the task, with the latter presented within the context of achievement on the CIL scale.

The five discrete task items

Example Item 1 (Figure 3.1), a complex multiple-choice item, required the participating ICILS students to respond by selecting as many check boxes as they thought were appropriate.

Figure 3.1 Example Item 1 with framework references and overall percent correct

Example Item 1 illustrates achievement at Level 1 on the CIL scale. This item was the first one that students completed in the After-School Exercise module, and it asked them to identify the recipients of an email displaying the “From,” “To,” and “Cc” fields. The item assessed students’ familiarity with the conventions used within email information to display the sender and recipients of emails. In particular, it assessed whether students were aware that people listed in the Cc field of an email are also intended recipients of an email. Sixty-six percent of students answered Example Item 1 correctly. The achievement percentages across countries ranged from 30 percent to 85 percent.

Example Item 2 (Figure 3.2) was the second item students completed in the After-School Exercise module. Note that Example Items 1 and 2 use the same email message as stimulus material for students, thus showing how questions are embedded in the narrative theme of each module.

Figure 3.2 Example Item 2 with framework references and overall percent correct

The email message in Example Item 2 told students that they would be working on a collaborative web-based workspace. Regardless of whether students read the text in the body of the email when completing Example Item 1, the tactic of presenting them with the same email text in the second item was authentic in terms of the narrative theme of the module. This was because students’ interaction with the first item (a complex multiple-choice one) meant they did not have to navigate away from the email page. This narrative contiguity is a feature of all ICILS assessment modules.

Example Item 2 required students to navigate to a URL given as plain text. Ability to do this denoted achievement at Level 2 of the CIL scale. Although the task represents a form of basic navigation, it was made more complex by presenting the URL as plain text rather than as a hyperlink. In order to navigate to the URL, students needed to enter the text in the address bar of the web browser (by copying and pasting the text from the email or by typing the characters directly into the address bar) and then to activate the navigation by pressing enter or clicking on the green arrow next to the address bar. The task required students to know that they needed to enter the URL into the address bar. They also needed to have the technical skill to enter the text correctly and activate the navigation. This set of technical knowledge and skills is why the item reflects Level 2 proficiency on the CIL scale.

Scoring of Example Item 2 was completed automatically by the computer-based test-delivery system; all methods of obtaining a correct response were scored as equivalent and correct. Forty-nine percent of students answered Example Item 2 correctly. The percentages correct ranged from 21 to 66 percent across the 21 countries.

Example Item 3 (Figure 3.3) also illustrates achievement at Level 2 on the CIL scale. We include it here to further illustrate the narrative coherence of the CIL modules and also the breadth of skills that are indicative of achievement at Level 2.

Figure 3.3 Example Item 3 with framework references and overall percent correct

Example Item 3 was one of the last items leading up to the large task in the After-School Exercise module. Previously, the narrative sequence of the module had required students to navigate to a collaborative workspace website and then complete a set of tasks associated with setting up an account on the site. Now, in order to accomplish the task in Example Item 3, students had to allocate “can edit” rights to another student who was, according to the module narrative, “collaborating” with the student on the task. To complete this nonlinear skills task,Footnote 4 students had to navigate within the website to the “settings” menu and then use the options within it to allocate the required user access. The computer-based test-delivery system automatically scored achievement on the task. Fifty-four percent of students answered Example Item 3 correctly. The crossnational percentages ranged from 16 percent to 74 percent.

Example Items 4 and 5 (Figures 3.4 and 3.5) focus on students’ familiarity with the characteristics of an email message that suggest it may have come from an untrustworthy source. These two items are set within the part of the module narrative requiring students to create their user accounts on the collaborative workspace. After setting up their accounts, students were presented with the email message and asked to identify which characteristics of it could be evidence that the sender of the email was trying to trick users into sending him or her their password.

Figure 3.4 Example Item 4 with framework references and overall percent correct

Figure 3.5 Example Item 5 with framework references and overall percent correct

Example Item 4 provides one aspect of the developing critical perspective (in this case relating to safety and security) that students working at Level 3 on the CIL scale are able to bring to their access and use of computer-based information. The highlighted email greeting in the item signals that this piece of text forms the focus of the item. Students were asked to explain how the greeting might be evidence that the email sender was trying to trick them. Students who said the greeting was generic (rather than personalized) received credit on this item. Twenty-five percent of students answered the item correctly. The percentages across countries ranged from 4 percent to 60 percent.

The students’ written responses to this open response item were sent to scorers in each country by way of an online delivery platform. All scorers had been trained to international standards.Footnote 5

Example Item 5 required students to evaluate a different highlighted aspect of the same email they considered in Example Item 4. In Example Item 5, students’ attention was focused on the sender’s email address. The team developing the assessment instrument contrived this address to appear as an address registered under a “freemail” account. (National center staff in each country adapted and translated the address to fit the local context.) Note that the root of the address differs from the root of the address the sender provided in the hyperlink presented in the body of the email.

Student responses were scored as correct if they identified the email as a trick either because it originated from a freemail account (and not a company account) or because it did not match the root of the hyperlink they were being asked to click on. Successful completion of the item illustrates achievement at Level 4, the highest level on the CIL scale. It required students to demonstrate sophisticated knowledge and understanding of the conventions of email and web addresses in the context of safe and secure use of information. On average, across ICILS countries, 16 percent of students answered Example Item 5 correctly. The crossnational percentages ranged from 3 to 28 percent.

Example ICILS large-task item

The large task in the After-School Exercise test module required students to create a poster to advertise their selected program. Students were presented with a description of the task details as well as information about how the task would be assessed. This information was followed by a short video designed to familiarize them with the task. The video also highlighted the main features of the software students would need to use to complete the task.

Figure 3.6 shows the task details screen that students saw before beginning the After-School Exercise large task. It also shows the task details and assessment information that students could view at any time during their work on the task.

Figure 3.6 After-School Exercise: large task details

As evident from Figure 3.6, students were told that they needed to create a poster to advertise an after-school exercise program at their school. They were also told that the poster should make people want to participate in the program. They were then instructed to select an activity they thought would be most suitable for inclusion in the program from a website provided to them within the test environment. The website, Healthy Living, was one they had encountered during their work on the earlier tasks in the module. The upper half of Figure 3.7 shows the large task as presented to students. The bottom half of the figure shows the home page of the Healthy Living website.

Figure 3.7 After-School Exercise: large task and website resource

Students were also provided with a list of minimum necessary content to include in the poster: a title, information about when the program would take place, what people would do during the program, and what equipment/clothing participants would need. Students were also told that the program should last 30 minutes and be targeted at participants over 12 years of age.

At any time during their work on the large task, students could click on the magnifying glass button to see a summary list of the task’s scoring criteria. These related to the suitability of the poster for the target audience, its relevance, the completeness of its information, and the layout of its text and images. The assessment criteria given to the students were a simplified summary of the detailed criteria used by the expert scorers.

The After-School Exercise large task was presented to students as a blank document on which they could create their poster using the editing software. The software icons and functions matched the conventions of web-based document editors. In addition, all icons in the software included “hover-over” text that brought up the names of the related functions. While these icons were universal across the ICILS test environment, all hover-over labels were translated into the language(s) of administration in each country.

The following software features were available for students to use to create the poster:

  • Add text: When students clicked on the “Tt” icon, a dialogue box opened that allowed them to add text. The text then appeared in a text box on the poster. Students could also reopen text boxes and edit the contents.

  • Edit text: The text entry dialogue box included a small range of formatting features—font color, font size, bold, underline, text alignment, and numbered or bulleted lists.

  • General editing: Students could cut or copy and paste text (such as from the website material), undo and redo actions, and revert the poster to its original state (i.e., to start again) by using the icons to the right of the screen. They could also move and resize all text boxes and images by clicking and dragging.

  • Change background: When students clicked on a background presented on the left of the screen, the poster background changed to match the selection. The task developers deliberately set the default background and text color to gray. This meant that students could receive credit for using effective color contrast (such as black on white) only if they changed the color of at least one of the elements rather than relying solely on the default settings.

  • Insert images: At the left of the screen, students could toggle between backgrounds (shown in Figure 3.7) and images that they could include in their presentation. Students could insert selected images by clicking and dragging them into the poster. Once inserted in the poster, images could be freely moved and resized.

At the top of the screens shown in Figure 3.7 are clickable website tabs that allowed students to toggle between the poster-making software and the website they had available as an information resource. This website offered information about three forms of 30-minute exercise activities—skipping, Pilates, and fencing. Students could find additional information about each program by clicking on the links within the website. They could also choose any activity (or combination of activities) to be the subject of the poster.

The pages about each activity contained a range of information about it, some of which was relevant within the context of the information poster and some of which was irrelevant. Once students had selected their preferred activity or activities, they needed to filter out the irrelevant information. Students could copy and paste text from the resources into their poster if they wished. They could also insert images shown in the websites into their poster.

When students had completed their poster, they clicked on the “I’ve finished” button, an action which saved their poster as the “final” version. (The test delivery system also completed periodic automatic saves as a backup while students were working on their tasks.) Students then had the option of exiting the module or returning to their large task to continue editing.

Once students had exited the module, the final version of the poster was saved in preparation for later scoring by trained scorers within each country. These people scored each poster according to a set of 10 criteria (later reduced to nine in the process of data analysis). As was the case for the constructed response items described previously, data were only included in analyses if they met IEA standards for scoring reliability.

The large tasks in the ICILS test modules were all scored using task-specific criteria. In general, these fell into two categories: technical proficiency and information management. Criteria relating to technical proficiency usually concerned elements such as text and image formatting and use of color across the tasks.

Assessment of technical proficiency typically included a hierarchy from little or no control at the lower end to the use of the technical features to enhance the communicative impact of the work at the higher end. The criteria thus focused on ability to use the technical features for the purpose of communication rather than on simply an execution of skills. Criteria relating to information management centered on elements such as adapting information to suit audience needs, selecting information relevant to the task (or omitting information irrelevant to it), and structuring the information within the task. Some criteria allowed for dichotomous scoring as either 0 (no credit) or 1 (full credit) score points; others allowed for partial credit scoring as 0 (no credit), 1 (partial credit), or 2 (full credit) score points.
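
By way of illustration, the sketch below shows one way such a criterion set could be represented and totaled; the labels and maximum scores are illustrative only and do not reproduce the operational ICILS scoring scheme.

    from dataclasses import dataclass

    @dataclass
    class Criterion:
        label: str
        max_score: int  # 1 = dichotomous (0/1), 2 = partial credit (0/1/2)

    # Illustrative subset of large-task criteria (labels hypothetical).
    CRITERIA = [
        Criterion("title layout and content", 2),
        Criterion("use of full page", 1),
        Criterion("color contrast of text", 2),
        Criterion("completeness of information", 2),
    ]

    def total_score(awarded):
        """Sum a scorer's awarded points, capping each at the criterion maximum."""
        maxima = {c.label: c.max_score for c in CRITERIA}
        return sum(min(points, maxima[label]) for label, points in awarded.items())

    # Example: a poster scored 1 on the title, 1 on full page, 2 on contrast.
    print(total_score({"title layout and content": 1,
                       "use of full page": 1,
                       "color contrast of text": 2}))  # 4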

The manifestation of the assessment criteria across the different tasks depended on the nature of each task. For example, information flow or consistency of formatting to support communication in a presentation with multiple slides requires consideration of the flow within and across the slides. The After-School Exercise large task comprised a single poster. As such, the scoring criteria related to the necessary elements and content of an information poster.

Table 3.3 provides a summary of the scoring criteria used for the After-School Exercise large task. Criteria are presented according to their CIL scale difficulties and levels on the CIL scale as well as their ICILS assessment framework references, relevant score category and maximum score, the percentage of all students achieving each criterion, and the minimum and maximum percentages achieved on each criterion across countries. Full details of the percentages that students in each country achieved on each criterion appear in Appendix B.

Table 3.3 Example large-task scoring criteria with framework references and overall percent correct

The design of the large tasks in the ICILS assessment meant that the tasks could be accessed by students regardless of their level of proficiency. The design also allowed students across this range to demonstrate different levels of achievement against the CIL scale, as evident in the levels shown in the scoring criteria in Table 3.3.

Each of Criteria 2, 5, 8, and 9 takes up a single row in Table 3.3 because each was dichotomous (scored as 0 or 1), with only the description corresponding to a score of one for each criterion included in the table. Each of Criteria 1, 3, 4, 6, and 7 was partial-credit (scored as 0, 1, or 2). Table 3.3 contains a separate row for the descriptions corresponding to a score of one and a score of two for each of these criteria. In most cases, the different creditable levels of quality within the partial-credit criteria correspond to different proficiency levels on the CIL scale. For example, the description of a score of one on Criterion 3 is shown at Level 2 (553 scale points), and the description of a score of two on the same criterion is shown at Level 4 (673 scale points).

We can see from Table 3.3 that two scoring criteria for the poster corresponded to Level 1 on the CIL scale. These both related to students’ use of color and reflected students’ familiarity with the basic layout conventions of electronic documents. Overall, 80 percent of students were able to demonstrate some planning in their use of color to denote the role of different components of the poster. Sixty-eight percent of students could ensure that at least some elements of the text in the poster contrasted sufficiently with the background color to aid readability.

Color contrast was a partial credit criterion. The ICILS scoring system automatically scored the relative brightness of the text and background against an adaptation of relevant criteria in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0). The ICILS technical report provides full details of this process (Fraillon et al., forthcoming).

Human scorers then looked at the automatically generated score for each poster and could either accept or modify the score. Students whose control of color contrast was basic received one score point. Basic color contrast meant that the student used the same text color throughout the poster, used color that did not contrast strongly with the background, or used a range of text colors, with some contrasting well and others contrasting poorly with the background. Students whose posters exhibited sufficient color contrast for all text elements to be read clearly received two score points. These students’ achievement aligned with the higher levels of planning control characteristic of Level 3 on the CIL scale.
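
For reference, the WCAG 2.0 contrast ratio on which the automatic scoring was based is computed from the relative luminance of the two colors, as sketched below; the score thresholds applied in ICILS are not reproduced here, and the details of the adaptation are given in the technical report.

    def relative_luminance(rgb):
        """WCAG 2.0 relative luminance of an sRGB color given as (R, G, B) in 0-255."""
        def channel(c):
            c = c / 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(foreground, background):
        """WCAG 2.0 contrast ratio between two colors, from 1:1 up to 21:1."""
        lighter, darker = sorted(
            (relative_luminance(foreground), relative_luminance(background)),
            reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # Black text on a white background gives the maximum ratio of 21:1;
    # gray text on a gray background (the task defaults) scores much lower.
    print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0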

Four scoring criteria corresponded to Level 2 achievement on the CIL scale. One of these—use of full page—was dichotomous and so appears at Level 2 only. Students were told in the task brief that the quality of the poster’s layout was one of the scoring criteria for the task. One aspect of layout under consideration was whether or not the student used the full space available on the poster. Students who used the full space rather than leaving large sections of it empty received credit on this criterion.

Level 2 achievement on the scale was also exemplified by posters that included two of the three pieces of information that students were instructed to provide, that is, when the program would take place, what people would do during it, and what equipment/clothing they would need. Posters with some evidence of the use of formatting tools to convey the role of different text elements also exemplified Level 2 achievement. Each of these two categories represented the one-score-point category in the partial credit criteria. The first criterion related to the completeness of information the students provided and the second to students’ ability to plan and control their formatting of text elements. Achievement at Level 2 was evidenced by inconsistent or incomplete attempts to meet these criteria.

Students were instructed to include a title in their poster, and this was scored according to its layout and content. The title needed to represent the notion of an exercise program or refer to the activity the student selected in order to be eligible to receive credit. The level of credit on this criterion was then determined according to the layout and formatting of the title. Posters in which the title was situated in a prominent position on the page were credited with a single score point. This level of credit corresponded to 492 CIL scale points, which is on the boundary between Levels 1 and 2 of the scale. Posters in which the title was both in a prominent location and formatted to make its role clear exemplified Level 2 achievement on the scale.

Table 3.3 furthermore shows that, overall, the percentages of students achieving success on the four Level 2 criteria ranged from 46 percent (some control of text formatting and layout and use of full page) to 55 percent (two of the three requisite pieces of information included in the poster). The examples of achievement at Level 2 on the poster are indicative of students who can demonstrate some degree of control in executing procedural skills relating to layout and information.

At Level 3, students’ execution of the posters shows greater control and independent planning than at the lower levels. Five categories of criteria indicated Level 3 achievement. Two of these criteria focused on students’ ability to include images in their posters and to make their posters persuade readers to participate in the program. The inclusion of at least one image properly laid out in the posters and evidence of some attempt to persuade readers are both indicative of Level 3 achievement.

Also at Level 3 were the consistent use of color in order to denote the meaning of text elements (the full credit category of the partial credit criterion referred to in Level 1), inclusion of all three requisite pieces of information (the full credit category of the partial credit criterion referred to in Level 2), and some adaptation of information taken from the website resources for use in the poster (the partial credit category of a criterion for which full credit is at Level 4).

The use of information in the posters at Level 3 typically showed evidence of independent planning extending beyond completion of the procedural aspects of the task. The posters also included evidence of attempts to fulfill their persuasive purpose. In addition to being relevant, the information included in the posters needed to show evidence of having been adapted to some extent rather than simply copied and pasted into the poster. In essence, Level 3 posters could be positioned as complete products that were largely fit for purpose.

The overall percentages of students achieving at each of the five categories of Level 3 achievement criteria ranged from 23 percent (sufficient contrast to enable all text to be seen and read easily) to 40 percent (one or more images well aligned with the other elements on the page and appropriately sized).

Two categories of scoring criteria on the After-School Exercise large task were evidence of Level 4, the highest level of achievement on the CIL scale. Each category was the highest (worth two score points) within its partial credit criterion. Posters at Level 4 showed a consistent use of formatting of the text elements so that the role of all the elements was clear. This attribute is an example of software features being used to enhance the communicative efficacy of an information product.

Students completing posters at this level were able to go beyond simple application of commands to deliberately and precisely use the software tools so that the text’s layout (through such features as bulleted lists, indenting, and paragraph spacing) and format (e.g., different font types, sizes, and features) provided readers with consistent information about the role of the different elements on the poster. Those reading the poster would be immediately clear as to which text represented headings or body information and why the information had been grouped as it had (i.e., to convey different categories of meaning within the poster). In short, these students could use formatting tools in ways that enabled readers to understand the structure of information in the poster and thus gain intended meaning from it.

At Level 4, students could furthermore select relevant information about their chosen activity and adapt it, by simplifying or summarizing it, for use in the poster. As noted above, the information presented in the website was discursive, containing detail relevant (e.g., explanation of the activity and equipment) or irrelevant (e.g., the history of the activity) to the explicit purpose of the poster. Although Level 4 might represent an aspiration beyond the capability of most young people in the ICILS target age group, some of the surveyed students did do work commensurate with this level of achievement. Overall, 15 percent of students used the formatting tools sufficiently consistently throughout the poster to show the role of the different text elements. Seven percent of students were able to select the relevant key points from the resources and adapt them to suit the purpose of the poster.

Comparison of CIL across countries

Distribution of student achievement scores

Table 3.4 shows the distribution of student achievement on the CIL test for all countries and benchmarking participants. The length of the bars shows the spread of student scores within each country. The dotted vertical lines indicate the cut-points between proficiency levels. The average country scores on the CIL scale ranged from 361 to 553 scale points, a range that spans proficiency standards from below Level 1 to within Level 3 and is equivalent to almost two standard deviations. The distribution of country means is skewed. The range in mean scores from Chile to the Czech Republic shown in Table 3.4 is 66 scale points. Two countries, Thailand and Turkey, sit 113 and 126 scale points,Footnote 6 respectively, below Chile. Table 3.4 shows, in effect, a large group of countries with similar mean CIL scale scores and two countries with substantially lower scores.

Table 3.4 Country averages for CIL, years of schooling, average age, ICT Index, student–computer ratios and percentile graph

Table 3.4 also highlights, through the length of the bars in the graphical part of the table, differences in the within-country student score distributions. The standard deviation of scores ranges from a minimum of 62 scale points in the Czech Republic to 100 scale points in Turkey.Footnote 7 The spread appears to be unrelated to the average scale score for each country. Also, the variation in student CIL scores within countries is greater than that between countries, with the median distance between the lowest five percent and the highest five percent of CIL scores being around 258 scale points. Thailand and Turkey have the largest spreads of scores, with 316 and 327 scale points, respectively, between the lowest five percent and the highest five percent of CIL scale scores in those countries.

The differences between the average scores of adjacent countries across the 12 highest-achieving countries shown in Table 3.4 are slight. In most cases, the difference is less than 10 scale points (one tenth of a standard deviation). Larger differences are evident between Slovenia and Lithuania (16 scale points) and between Thailand and Turkey (13 scale points). The average scale score of students in Thailand is, in turn, 113 scale points below that of students in Chile.

CIL relative to the ICT Development Index and national student–computer ratios

Table 3.4 provides information about the average age of students in ICILS countries, the ICT Development Index for those countries,Footnote 8 and the student–computer ratio in each country. The ICILS research team considered the ICT Development Index and the student–computer ratio as means of ascertaining the digital divide across countries. Although this term is broad-reaching and sometimes contested, it most commonly refers to the notion of people in societies having varying degrees of opportunity to access and use ICT (see, for example, van Dijk, 2006, p. 223). Whereas the ICT Development Index serves in this section as a means of comparing general access to technology across countries, the student–computer ratio compares students’ access to computers at school across countries.

The relevant information in Table 3.4 suggests a strong association between a country’s average CIL achievement and that country’s ICT Development Index score. We recorded, at the country level, a Pearson’s correlation coefficient of 0.82, an outcome which suggests that the higher the level of ICT development in a country, the higher the average CIL achievement of its eighth-grade students.

When interpreting this result, it is important to take into account the relatively small number of countries as well as the fact that the two countries with the lowest ICT Development Index scores (Thailand and Turkey) had much lower CIL average scores than all other countries. However, when we removed these two countries from the Pearson calculation, the correlation between average CIL scores and the ICT Development scores remained strong at 0.62.

We also found a strong negative association across countries between the student–computer ratio and a country’s average CIL. We recorded a correlation coefficient of −0.70, which suggests that, on average, students had higher levels of CIL in countries with fewer students per computer. This relationship is consistent with the association between the CIL performance and ICT Development Index scores.

However, it is also important, when interpreting this result, to take into account the relatively small number of countries and, in particular, the fact that the country with the lowest CIL average, Turkey, had a much higher ratio of students to computers (80:1) than other ICILS countries had. When we removed Turkey from the calculation, the correlation coefficient between average CIL scores and student–computer ratio dropped to −0.26 (or −0.32 when we included the Canadian provinces).
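
A short sketch of how such country-level correlations, and their sensitivity to individual countries, can be checked is shown below; the arrays stand in for the country-level averages in Table 3.4 and are not actual ICILS values.

    import numpy as np

    def pearson(x, y, labels, exclude=()):
        """Country-level Pearson correlation, optionally excluding countries."""
        keep = [i for i, name in enumerate(labels) if name not in exclude]
        x_kept = np.asarray(x, dtype=float)[keep]
        y_kept = np.asarray(y, dtype=float)[keep]
        return float(np.corrcoef(x_kept, y_kept)[0, 1])

    # Hypothetical usage with country-level data (values not shown here):
    # r_all  = pearson(ict_index, cil_mean, labels)
    # r_trim = pearson(ict_index, cil_mean, labels, exclude=("Thailand", "Turkey"))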

Pair-wise comparisons of CIL

The information provided in Table 3.5 permits pair-wise comparisons of CIL scale score averages between any two countries. An upwards pointing triangle in a cell indicates that the average CIL scale score in the country at the beginning of the row is statistically significantly higher than the scale score in the comparison country at the top of the column. A downwards pointing triangle in a cell indicates that the average CIL scale score in the country at the beginning of the row is statistically significantly lower than the scale score in the comparison country. The unshaded cells (those without a symbol) indicate that no statistically significant difference was recorded between the CIL scale scores of the two countries. The shaded cells on the diagonal from top left to bottom right of the table are blank because these cells represent comparisons between each country and itself.

Table 3.5 Multiple comparisons of average country CIL scores

Table 3.5 also helps us determine whether relatively small differences in average CIL scale scores are statistically significant. The spread of the empty cells around the diagonal shows that the mean of student CIL in most countries was typically not statistically significantly different from the means in the three to five countries with the closest means but significantly different from the means in all other countries. The only exceptions to this pattern can be seen at the extreme ends of the achievement distribution, which, at the lower end, further illustrate the skew of the distribution described previously.

Achievement across countries with respect to proficiency levels

The countries in Table 3.6 appear in descending order according to the percentage of students with scores that positioned them at Level 4 on the CIL scale. The order of countries in Table 3.6 is similar to that in Table 3.4, where the countries are shown in descending order of average score. Smaller differences in the ordering of countries between the two tables are a result of different distributions of students across the levels within the countries that have similar average student CIL scores.

Table 3.6 Percent of students at each proficiency level across countries

The data in Table 3.6 show that, across all countries, 81 percent of students achieved scores that placed them within CIL Levels 1, 2, and 3. Overall, the largest proportion of student scores across countries sits within Level 2. In all countries except Thailand and Turkey, the highest percentage of students is evident at Level 2; the percentage of students at Level 2 in these countries varies between 36 percent in Korea and 48 percent in the Czech Republic. In Thailand and Turkey, 64 and 67 percent of students, respectively, are below Level 1. In total, 87 percent of students in Thailand and 91 percent in Turkey were achieving at Level 1 or below.

Although Level 2 contained the largest proportion of students in most countries, we can see some variation in the distribution of percentages across these countries. In six countries—Korea, Australia, Poland, the Czech Republic, Norway (Grade 9), and Ontario—the proportion of students above Level 2 (i.e., at Levels 3 and 4 combined) is higher than the proportion of students below Level 2 (i.e., at Level 1 or below). In eight other countries (the Slovak Republic, the Russian Federation, Croatia, Germany, Lithuania, Chile, Slovenia, and Newfoundland and Labrador), the proportion of students above Level 2 is smaller than the proportion below Level 2.

Conclusion

The ICILS assessment, the development of which was based on the ICILS conceptual framework, provided the basis for a set of scores and descriptions of four levels of CIL proficiency. Those descriptions articulate in concrete form the meaning of the construct computer and information literacy, which, together with related constructs, has until now lacked an empirically based interpretation capable of underpinning measurement and analysis of this form of literacy.

Our comparisons of CIL scores showed considerable variation across the participating ICILS countries. In the five highest-performing countries, 30 percent or more of the student scores could be found at Levels 3 or 4. In contrast, for the two lowest-achieving countries, only one or two percent of students were achieving at Levels 3 or 4. More than 85 percent of the student achievement scores in these two countries were below Level 2. For all other countries, 31 percent of student scores sat, on average, below Level 2.

There was also considerable variation within countries. On average, the achievement scores of 80 percent of students extended across 250 score points or three proficiency levels. The variation within countries was greatest in Turkey, Thailand, and the Slovak Republic and lowest in the Czech Republic, Slovenia, and Denmark.

Across countries, CIL average scores were positively associated with the ICT Development Index, and negatively associated with the ratio of students to computers. ICILS included these indices and their associations with CIL in the hope of inspiring more detailed investigations into the relationship, within and across countries, between access to ICT and CIL.