I-LEARN and the Assessment of Learning with Information



Assessment is a staple of formal learning environments and a useful concept for monitoring learning in informal information-rich environments as well. This chapter surveys the recent history of the assessment movement and positions the I-LEARN model as a framework that is especially well suited to a contemporary assessment of learning with information. The model is consistent with both traditional and current approaches to assessment, its structure lends itself to the design of assessment instruments, and it addresses current and emerging thinking about using information as a tool for learning. Above all, it provides a mechanism for linking learning and assessment in a holistic, authentic, and satisfying experience. Different in tone and structure from the preceding chapters, this final chapter draws together and expands ideas introduced throughout the book. It closes the loop on learning in information-rich environments with a discussion of how I-LEARN can promote and assess such learning not only in today’s information-rich environments but in those of the future as well.



Assessment—determining what learners know and have learned—has been a part of formal education since at least the fifth century BC. When Socrates asked questions of his students in Athens, he was, in effect, conducting a kind of formative assessment: that is, he used their initial answers to ascertain their underlying knowledge and then continued questioning to help them correct and expand that knowledge. History does not tell us whether Socrates administered a summative assessment—a final exam—at the end of the process, but we can be confident that he was concerned about his students’ ultimate achievement as well as their progress along the way.

In the centuries since, assessment has changed dramatically. Not only has it become far more complex and formalized, it has often become divorced from its origins as a teaching tool. In the twentieth century, summative assessment emerged as one of the primary foci of the modern education establishment. Today, not only assessing what students have learned but also documenting their attainment of specific outcomes at the conclusion of an instructional experience is a leading factor in American (and other) educational policy. High-stakes summative assessment—the kind in which test scores are used not only to certify achievement but also to permit advancement or graduation and to determine competitive advantage in further study—is in place around the world. The SAT, ACT, and Advanced Placement exams in the United States; the O-Level (or GCSE) and A-Level exams in most Commonwealth countries; the Matura in much of Eastern Europe; and the Abitur in Germany, Austria, and Finland all exemplify this approach. Some broader conceptions of assessment—that is, conceptions that once again link assessment directly to teaching and learning—are beginning to appear, but the predominant view of assessment early in the twenty-first century still involves very high stakes for faculty and schools as well as for students.

Against this background, the question becomes: What role does assessment, both formative and summative, play in today’s information-rich environments, both formal and informal? What is its relevance, its contribution, to the kind of learning discussed in this book? To answer that question, it is critical both to understand key contemporary views of assessment and to explore how they might be adopted or adjusted to meet the needs of today’s learners.

6.1 Evolving Views of Assessment

Contemporary views of assessment are evolving, and a brief review of how they emerged over the past twenty years will set the stage for understanding that evolution and its current status. Arguably, in the United States the most important factor during this period has been the growth of national interest in summative assessment in formal education. Since the National Governors’ Summit first identified educational goals for the country in 1989, this kind of assessment has become the siren song for legions of educators. Scholars, curriculum specialists, and policy makers at all levels have worked along two parallel tracks: (1) to identify concepts and skills that students should master through instructional experiences and (2) to craft tools and measures for assessing that mastery at the end of courses or programs of study.

In the 1990s, national professional organizations published hundreds of “standards” that specified such concepts and skills across elementary-, middle-, and secondary-school disciplines (collected in Kendall and Marzano’s 650-page tome published in 2000). State and local educational agencies adapted these national statements to create their own lists of standards that soon constituted their official (or at least virtually official) curricula. Because the statements captured what disciplinary experts had identified as important for students to learn, they were readily transcribed into “scope and sequence” documents that led, in turn, to specific lessons and other curricular materials that drove what students were taught across the country.

By and large, the standards reflected in all these developments are summative in nature: that is, they describe the final results, or “outcomes,” of students’ learning rather than the process students might take to achieve these results. The term “outcomes” suggests as much. And once such standards/outcomes have been established, assessing students’ achievement of them is the inevitable next step: “Assessment is inherent in the idea of standards. The reason for specifying [them] is to provide statements about what is important for students to learn so that, in turn, instructors can evaluate how well students have achieved those outcomes. Assessment is the intrinsic, unavoidable flip side of standards” (Neuman 2000, p. 111). Not surprisingly, the development of standards was soon followed by the development of instruments to judge students’ achievement of them.

Indeed, stating explicit outcomes in a clear and consistent way and using them as the basis for assessment has strong theoretical advantages and has been a cornerstone of instructional design and development for decades. Arguably, in fact, today’s focus on assessment began over fifty years ago, with the publication of Bloom’s original Taxonomy of Educational Objectives in 1956. The Taxonomy used “illustrative educational objectives selected from the literature” (p. 201) to suggest assessments at each of its six levels of learning, from “knowledge” to “evaluation.” A statement like “The student shall know the methods of attack relevant to the kinds of problems of concern to the social sciences” (p. 203) indicates that outcomes of the kind created in the 1990s were already in place decades earlier.

Years of research and experience with the outcomes-assessment approach—guided by early authors like Mager (1962) and Briggs (1977) as well as by Bloom (1956) and many others—have yielded a strong body of theory and practice for creating an outcomes-instruction-assessment continuum that has many advantages. It forces both instructors and designers to identify the major concepts and skills they want learners to master, helps teachers direct instruction toward those outcomes, and eliminates at least some of the subjectivity from grading. It informs learners of what is most important for them to learn, taking the guesswork out of learning and helping more of them achieve higher levels of understanding. The approach has endured largely because of these advantages. Contemporary authors like Wiggins and McTighe (1998, 2005) have promoted a modified version of assessing students’ performance against specified outcomes through their notion of “backward design”: “starting with the end (the desired results) and then identifying the evidence necessary to determine that the results have been achieved (assessments)” (p. 338). Although it developed outside the traditional literatures of both the instructional-design and assessment communities, the “backward design” movement has had widespread influence in the K-12 community in recent years.

It was the implementation of No Child Left Behind (signed into law in 2002)—coupled with the wave of standards that emerged in the 1990s—that thrust the American interest in summative assessment to a new level. Although standardized tests had been used to evaluate schools and students for decades, the new law’s requirements for high-stakes standardized testing at specific grade levels and in specific subject areas spawned a large and lucrative industry of publishers and others who developed state-level tests that have become the ultimate measure of students’—and schools’—success. During the first decade of the new century, educators, parents, and society at large were using such tests to reward and punish schools, to design plans for school improvement, and to reinforce the value of real estate to families who want to live where there are “good schools” (see, for example, Brasington and Haurin 2006, 2009; Haurin and Brasington 1996). For many, this focus on assessment seemed to make Lake Wobegon—“where all the children are above average”—the new American educational Utopia.

6.2 Looking Ahead

In general, No Child Left Behind has been deemed successful at its major goal—reducing disparities in achievement in both math and reading between majority and minority students at all levels of American schooling. The law has many critics, but a national focus on assessing student achievement is sure to persist. One indication of this persistence is the effort to create national “Common Core State Standards, K-12” in mathematics and English language arts. Released in June 2010, the standards had been adopted by over thirty states plus the District of Columbia by the following August (see http://www.corestandards.org/ for up-to-date information). Likely to be adopted by more states as they complete their formal procedures, the standards specify foundational skills and knowledge that are important for college and workforce training programs across the country. These outcome statements—and the assessments that are sure to follow them—are already broadening the discussion of K-12 assessment.

Another example that suggests the persistence of assessment is Microsoft’s Partners in Education Transformation Project. Formed in 2009 with Cisco and Intel, the project offers severe criticism of both traditional schooling and traditional assessment but still promotes assessment as the primary strategy for achieving what it calls “transformative reform.” The project’s “Assessment Call to Action”—one of the first documents it released—calls for “specify[ing] high-priority skills, competencies, and types of understanding that are needed [by] productive and creative workers and citizens of the 21st century and turn[ing] these specifications into measurable standards and an assessment framework” (Assessment Call to Action, p. 2; italics added). The project, along with its influential backers, thus presents assessment as an integral part of broadening our understanding of the kinds of learning that are most important today and of dealing with them in a holistic manner. Perhaps most significantly, the project interweaves learning and assessment into its image of comprehensive reform (see http://www.microsoft.com/education/programs/transformation.mspx).

A parallel yet very different development emerged early in the new century, when associations that accredit colleges and universities and the individual programs they offer started to focus on what students actually learned in these venues rather than only on what resources were brought to bear on their educations. Two quite different associations exemplify the range of this effort: the American Library Association (http://www.ala.org/), which accredits only Master of Library Science programs within higher education, and the Middle States Commission on Higher Education (http://www.msche.org/), which accredits entire degree-granting colleges and universities in “Delaware, the District of Columbia, Maryland, New Jersey, New York, Pennsylvania, Puerto Rico, the U.S. Virgin Islands, and several locations internationally.” Both organizations—and others as well—now require applicants for accreditation to specify learning outcomes for their students and to demonstrate that programs do in fact lead students to these outcomes.

In many cases, the movement of this focus on learning assessment into higher education brought to a new audience the idea of formally identifying learning outcomes both within courses and across programs. Fueled not only by accreditation agencies but also by a public—and some state legislatures—wondering whether higher education is worth its ever-rising cost, the focus on outcomes and outcomes assessment continues to grow across postsecondary education. Despite concerns that the approach brushes up against the tradition of academic freedom (at least in part because specifying outcomes implies organizing the curriculum around them), individual faculty and their institutions are now writing learning outcomes much as their K-12 colleagues do. Clearly, the idea of learning assessment has thoroughly penetrated the realm of formal education.

6.3 Assessment and Learning with Information

Educators concerned with the use of information for learning did not escape the standards-and-assessments wave of the 1990s. As noted in Chap. 4, several national organizations developed information-literacy standards and information-technology standards for both K-12 and postsecondary audiences. Information Power: Building Partnerships for Learning (American Association of School Librarians and Association for Educational Communications and Technology 1998) and Objectives for Information Literacy Instruction: A Model Statement for Academic Librarians (Association of College and Research Libraries 2001) covered the information-literacy landscape in formal education. The original National Education Technology Standards (NETS) (International Society for Technology in Education 1998) covered the information-technology landscape for K-12 students; in subsequent years, the NETS for students have been revised (2007), while NETS for teachers (2000, 2008) and administrators (2002, 2009) have been added to the mix. Once again, Bloom’s Taxonomy (1956) was called upon to guide the creation of outcome statements in this arena as well: an outcome such as “Judges the accuracy, relevance, and completeness of sources and information” (American Association of School Librarians and Association for Educational Communications and Technology 1998, p. 14) clearly exemplifies the use of the Taxonomy by targeting learning in the domain of information literacy at the “evaluation” level.

The standards-and-assessment wave—particularly in regard to using information—began to alter its course in the early part of the new century with a pair of initiatives developed in tandem. In 2003, the Partnership for 21st Century Skills (www.21stcenturyskills.org) produced Learning for the 21st Century: A Report and MILE (Milestones in Learning and Education) Guide for 21st Century Skills; in the same year, the Educational Testing Service (ETS) (www.ets.org) released a white paper entitled Succeeding in the 21st Century: What Higher Education Must Do to Address the Gap in Information and Communication Technology Proficiencies. Like the statements of essential information-literacy competencies and skills developed by these initiatives and introduced in this book in Chap. 4, these “assessment” documents offered a welcome new focus to those concerned with learning in information-rich environments. Both documents broke with the previous decade’s focus on subject-area standards to establish cross-disciplinary standards and assessments related to what are now called “information and communication technologies,” or the ICTs delineated in this book in Chap. 3. “ICT literacy,” which puts information at the core of learning, was defined as:

the ability to use digital technology, communication tools, and/or networks appropriately to solve information problems in order to function in an information society. This includes the ability to use technology as a tool to research, organize, evaluate and communicate information and the possession of a fundamental understanding of the ethical/legal issues surrounding the access and use of information. (Educational Testing Service 2003, p. 11)

Subsequent efforts at ETS spawned the development of an ICT Literacy Assessment at the postsecondary level, which is based on ETS’s seven components of ICT literacy: defining an information need, accessing information, managing information, integrating information from multiple sources, evaluating information, creating new information, and communicating information. This assessment joined the bank of assessments in a variety of areas that ETS has been building for sixty years, giving learning with information a new status in the postsecondary world. The ICT Literacy Assessment in turn evolved into an iSkills™ research and assessment program, which continues to focus on these components and to develop a more comprehensive approach to understanding and assessing them. (See, for example, white papers produced for ETS by Tyler 2005 and Katz 2005, 2007.)

Subsequent efforts of the Partnership for 21st Century Skills led to the publication of its Framework for 21st Century Learning in 2004. This document, supported by some forty organizational “partners,” specifically suggests a holistic view that links learning and assessment and offers “a unified, collective vision for 21st century learning [italics added] that will strengthen American education” across the board. As noted in Chap. 4, the Framework includes eleven “core subjects” (traditional curricular categories like language arts and science) and four “21st century themes,” including such topics as “global awareness” and “civic literacy.” Most significantly for learning with information, the document offers three sets of skills that support students’ mastery of each of those fifteen core subjects and contemporary themes: “learning and innovation skills,” “life and career skills,” and “information, media, and technology skills.” The Framework’s marriage of “information” skills and “media and technology skills” bridges ideas inherent in the earlier sets of information-technology and information-literacy standards noted above. And through its identification of “information, media, and technology skills” as necessary for mastering all the subjects and themes, the Framework also moves learning with information—and assessing such learning—into a key position in its “holistic view.”

Not surprisingly, the concepts and language of the Framework for 21 st Century Learning are embedded in the new Microsoft/Cisco/Intel Partners in Education Transformation Project described above: Microsoft, Cisco, and Intel are all among the approximately forty “member organizations” that support the Partnership for 21st Century Skills. Like these three, a number of the partner organizations are international in scope, suggesting that the Microsoft/Cisco/Intel project’s influence will not be limited to North America. In fact, this initiative states that it has formal plans to “examine innovative ICT-enabled classroom-based learning environments and formative assessments that address 21st century skills and draw implications for ICT-based international summative assessments and for reformed classroom practices aligned with assessment reform” (Assessment Call to Action 2009, p. 2). Calling specifically for assessment reform that will drive instruction to focus on learners’ mastery of the information skills that are at the heart of ICT literacy, this initiative provides perhaps the most striking contemporary imperative for assessing learning with information.

6.4 I-LEARN and Assessing Learning with Information: Formal Environments

Whether we look to the standards that undergird most educational practice today or to the alternatives suggested by the Educational Testing Service and the Partnership for 21st Century Skills, we find no lack of outcome statements that can be used to describe and assess what it means to use information as a tool for learning. And in formal educational environments—still generally organized by disciplinary categories and held accountable for students’ mastery of those categories—the clear statement of outcomes and the development of instruments to assess mastery of them is an approach that is likely to remain no matter what statements are adopted. Whether viewed as holistic or discrete, learning—including learning with information—will continue to be defined at least in part by outcome statements.

Within formal settings, the I-LEARN model explained in Chap. 5 provides a useful scaffold for assessing students’ ability to use information as a tool for learning. Grounded in learning theory, tied to the structure of information literacy, and linked both conceptually and practically to Anderson and Krathwohl’s (2001) update of Bloom’s original Taxonomy of Educational Objectives (1956), the model is situated in traditional ideas of learning and assessment but expands them to encompass newer approaches as well. As shown in Fig. 6.1, it includes six stages and eighteen elements drawn directly from the theory and practice of learning with information and provides a framework for assessing such learning as well as fostering it. For example, the model’s first five stages state specific outcomes that can be readily assessed through corresponding evaluation items: the learner will Identify a problem, Locate information about it, Evaluate the information according to specific criteria, Apply the appropriate information to construct knowledge, and Reflect on the process and product of that construction. The last stage—kNowing what has been learned—is not directly assessable but speaks to the holistic nature of learning and to the model’s ultimate step of internalizing kNowledge so that it can be used in the future.
Fig. 6.1

I-LEARN stages and elements
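For readers who want the structure at a glance, the following sketch restates the six stages and the outcome each implies. It is purely illustrative: the stage names are the model's, the outcome phrasing paraphrases this section, and the eighteen elements shown in Fig. 6.1 are omitted.

```python
# Illustrative summary of the I-LEARN stages described above; the outcome
# wording paraphrases the chapter, and the model's elements are omitted.
ILEARN_STAGES = [
    ("Identify", "identify a problem or question that can be addressed with information"),
    ("Locate",   "locate information relevant to the problem"),
    ("Evaluate", "evaluate that information according to specific criteria"),
    ("Apply",    "apply the appropriate information to construct knowledge"),
    ("Reflect",  "reflect on the process and product of that construction"),
    ("kNow",     "internalize the knowledge so it can be used in the future"),
]

# Per the text, only the first five stages state directly assessable
# outcomes; the kNowing stage speaks to the holistic nature of learning.
ASSESSABLE = [name for name, _ in ILEARN_STAGES[:5]]
```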

Overall, the model provides a guide for creating and implementing assessment across the full continuum of possible information-literacy outcomes; its breadth and flexibility allow its application for both formative and summative assessment. At the “identify” stage, for example, learners can be assessed on the degree to which they generate a problem or question that is substantive and information-based: a question that requires merely remembering factual knowledge—the party affiliation of one’s local congressional representative, for example—would be less impressive than one that requires metacognitive knowledge—perhaps analyzing the electoral process at the state or national level. Similarly, at the “apply” stage, generating a new (to the learner) understanding of the definition of the chi-square statistic—conceptual knowledge—would be less impressive than generating a new understanding of how to apply the chi-square test to a particular statistical problem—which requires, at the very least, procedural knowledge. Variations on this scaffold are, of course, almost infinite—enabled and constrained by learners’ needs, teachers’ abilities, curricular goals, whether the assessment is formative or summative, and a host of additional conditions and circumstances. Nevertheless, the possibilities suggested by the model’s links to Anderson and Krathwohl’s (2001) types of knowledge and levels of learning as these relate to information provide the basis for an intriguing assessment tapestry.
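The ordering of knowledge types that underlies these judgments can be made concrete in a short sketch. It is a hedged illustration: the factual-to-metacognitive ordering follows Anderson and Krathwohl (2001) and the examples above, but the numeric scale is an assumption introduced here, not part of I-LEARN.

```python
# Anderson & Krathwohl's (2001) knowledge types, ordered as in the
# chapter's examples. The 1-4 rating is an illustrative assumption.
KNOWLEDGE_TYPES = ["factual", "conceptual", "procedural", "metacognitive"]

def sophistication(knowledge_type: str) -> int:
    """Map a knowledge type to a 1-4 rating (higher = more sophisticated)."""
    return KNOWLEDGE_TYPES.index(knowledge_type) + 1

# The chapter's examples: recalling a representative's party affiliation
# (factual) rates below analyzing the electoral process (metacognitive);
# defining the chi-square statistic (conceptual) rates below applying
# the chi-square test (procedural).
assert sophistication("factual") < sophistication("metacognitive")
assert sophistication("conceptual") < sophistication("procedural")
```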

Even without investigating such a tapestry, it is clear that I-LEARN’s stages and elements for learning with information could readily be assessed by conventional strategies, be they test items or checklists or criteria on a rubric. Morrison et al. (2004) identify a dozen or more assessment tools that might be used to evaluate various aspects of learners’ ability to use information to learn: multiple choice, true/false, matching, short-answer, and essay tests as well as checklists, performance ratings, problem-solving exercises, and rubrics. A multiple-choice item, for example, might require learners to “identify,” within an array of choices, the best example of a question that can be answered with information; a problem-solving exercise might require them to “locate” appropriate sources to answer an information-based question. As in other subject areas, a test bank of such items could be developed, administered, and graded for students at any level.
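A minimal sketch of such a test-bank item appears below. The field names, prompt, and answer choices are invented for illustration; they are not drawn from any published instrument.

```python
# A hypothetical multiple-choice item targeting the "identify" stage,
# of the kind a test bank described above might contain.
from dataclasses import dataclass
from typing import List

@dataclass
class TestItem:
    stage: str          # the I-LEARN stage the item assesses
    prompt: str
    choices: List[str]
    answer_index: int   # index of the keyed (correct) choice

item = TestItem(
    stage="Identify",
    prompt=("Which of the following is the best example of a question "
            "that can be answered with information?"),
    choices=[
        "Is chocolate better than vanilla?",
        "How did average rainfall in Brazil change between 1990 and 2010?",
        "What is my favorite color?",
    ],
    answer_index=1,
)

def grade(item: TestItem, response: int) -> bool:
    """Score one learner response against the keyed answer."""
    return response == item.answer_index
```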

Harada and Yoshina (2005) add personal conferences, activity logs, personal correspondence, and graphic organizers like concept maps, idea webs, K-W-L charts, and matrices to the mix of tools for assessing learning with information. Such approaches lend themselves especially well to formative assessment but can be used in summative assessments as well. In terms of I-LEARN, they might involve learners in creating matrices comparing their “evaluations” of a variety of information sources according to criteria learned in a class—for example, authority, relevance, and timeliness as these facets pertain to a particular project. Or they might involve creating concept maps or idea webs that show the results of learners’ “application” of information to answer a question—the effects of climate, topography, and altitude on exports from Brazil, for example—along with citations to the sources used to find that information. Overall, Harada and Yoshina’s advocacy for using visual displays as assessment tools offers an intriguing alternative to more traditional assessment approaches.

One of the most popular assessment tools today, in both K-12 and higher education, is the rubric—an instrument in the form of a grid that identifies the components of a task, the criteria for assessing the quality of each completed component, and scores that correspond to the instructor’s judgment about a learner’s level of performance on those criteria (Strickland and Strickland 2000). Arguably, the rubric is also the most promising tool for assessing students’ ability to learn with information in a formal setting. Rubrics’ inherent connection to the process of learning and their strength in addressing both that process and its outcome make them ideal for assessing what is essentially process-based learning: the process of using information to generate knowledge. Giving a learner a rubric in advance allows that learner to see specifically what is expected, to work toward that expectation, and to determine for him- or herself the degree of success attained. A rubric also allows for iterative formative assessment and enables an instructor to provide targeted feedback to a learner by explaining how that learner excelled or fell short in a particular area. Ultimately, then, a rubric allows an instructor to provide guidance for improving both the process and the outcome of learning. Using rubrics is thus fully consistent with both formative assessment, whose goal is improved understanding and performance, and summative assessment, whose goal is to document the outcome of the learning process. As Harada and Yoshina (2005) note, “A well-designed rubric is both a tool for assessment and a powerful teaching strategy” (pp. 21–22).

Fig. 6.2 illustrates a generic rubric that might be adapted to any subject area to evaluate students’ understanding of each of the stages of learning with information outlined in I-LEARN. Assessing a learner’s achievement at each step would provide information about how well he or she grasped the pieces of the process, while assessing the learner’s ability to make links across these steps would provide information about his or her understanding of the overall process of learning with information. The assessment might be formative (judging how well students master each step and providing guidance where needed) and/or summative (judging students’ “final” levels of understanding of each step and of the overall process). The range of possible scores—from a high of 20 for a student who scores a 4 for each step to a low of 5 for a student who is unsuccessful at each—provides ample room for a teacher to provide nuanced feedback that would tell a student how well he or she performed at each stage and element and what components of learning with information need additional attention.
Fig. 6.2

I-LEARN assessment rubric
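To make the rubric's scoring arithmetic concrete, the sketch below assumes, as described above, five assessable stages each rated on a 1-4 scale, yielding totals from 5 to 20. The feedback threshold is an illustrative assumption; a real rubric (Fig. 6.2) would also spell out the descriptor behind each score level.

```python
# The five assessable I-LEARN stages, each rated 1-4 as described above.
STAGES = ["Identify", "Locate", "Evaluate", "Apply", "Reflect"]
MIN_RATING, MAX_RATING = 1, 4

def total_score(ratings: dict) -> int:
    """Sum the per-stage ratings after validating the 1-4 range."""
    for stage in STAGES:
        if not MIN_RATING <= ratings[stage] <= MAX_RATING:
            raise ValueError(f"{stage} rating out of range: {ratings[stage]}")
    return sum(ratings[stage] for stage in STAGES)

def needs_attention(ratings: dict, threshold: int = 3) -> list:
    """Flag stages to revisit (the formative half of the rubric's use).
    The threshold of 3 is an illustrative assumption."""
    return [stage for stage in STAGES if ratings[stage] < threshold]

# Example: a learner strong at locating information, weak at evaluating it.
ratings = {"Identify": 3, "Locate": 4, "Evaluate": 2, "Apply": 3, "Reflect": 3}
print(total_score(ratings))      # 15 (possible range: 5-20)
print(needs_attention(ratings))  # ['Evaluate']
```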

Like any rubric, this one can be tweaked to reflect the content of the learning experience at hand. For example, it could reflect the difference in “timeliness” when evaluating a learner’s use of information in relation to a historical event like the Vietnam War and that same student’s use of information for a report on the contemporary issue of climate change. Similarly, it could be tweaked to reflect the particular resources providing the information: adding a requirement that students look beyond the first three “hits” provided by a search engine’s weighting algorithm would be useful in some settings but not in others. It could include a focus on how effectively students incorporated the learning affordances associated with their final information products (see Chaps. 2 and 3). And, of course, it could be tweaked to reflect an individual teacher’s understanding of particular students’ abilities and needs: the kind of representation or information product expected of middle schoolers would obviously differ from the kind expected of graduating seniors. While there is still much to be learned about using I-LEARN in practice, Fig. 6.2 provides the scaffolding for one of the tools that might be developed to support its use for both formative and summative assessment in schools.

6.4.1 A Curriculum for Learning with Information?

Formal assessment is usually related to a formal curriculum, and the question of whether there should be an “information literacy” curriculum surfaces regularly within the research and professional community of school librarians and library media specialists. Conventional wisdom—buttressed by extensive research (see, e.g., Eisenberg et al. 2004; Kuhlthau 1987; Loertscher and Woolls 2002)—holds that information-skills instruction should be integrated with instruction in subject areas so that it is meaningful to students and so that they will remember it from year to year. Even if the new focus on ICTs elevates instruction in this area to curricular status, such instruction would have to be related to—if not anchored in—other subject-matter areas.

I-LEARN lends itself to integrated instruction because of its general nature and because of its emphasis on process rather than only on outcomes. Fig. 6.3 shows how instruction based on the model might be integrated with curricular content to teach students not only that content but also the knowledge and skills to use information as a tool for mastering it. The figure also suggests the model’s utility as a way to link instruction in using information as a tool for learning and the assessment of students’ achievement.
Fig. 6.3

I-LEARN and formal instruction: A seventh-grade social-studies activity

This structure—which suggests both curriculum and assessment—lends itself not only to K-12 use but to higher education environments as well. Since the release of Objectives for Information Literacy Instruction: A Model Statement for Academic Librarians (Association of College and Research Libraries) in 2001, college and university libraries have come under increasing pressure to demonstrate their value by showing, among other things, a connection to student learning. In recent years, the Association of College and Research Libraries has released a series of documents designed to guide instruction in how to use the library and its resources and in how to conduct library-based research: Information Literacy Standards for Science and Technology (2006), Research Competency Guidelines for Literatures in English (2007), and Information Literacy Standards for Anthropology and Sociology Students (2008) (all available at http://www.ala.org/ala/mgrps/divs/acrl/standards/infolit.cfm).

Constructing modules for students in different majors according to the I-LEARN scaffold could provide an efficient and effective approach to helping undergraduates gain the information skills they need. Modules related to questions about particular issues in chemistry, in Russian literature in translation, in ethnographic methods, and so on could help students wrestle with areas their instructors identify as important as well as master the information skills required to do research in those areas. Collaboration among faculty, librarians—and students themselves—could lead to rich and enduring experiences of learning sophisticated concepts related to complex curricular topics and high-level information skills. Teaching the stages and elements of the model explicitly would give students a tool that could support their learning with information within and beyond the curriculum. Using that tool to guide assessment would create a link between learning and assessment that could result in a holistic and authentic experience for learners.

6.5 I-LEARN and Assessing Learning with Information: Informal Environments

There’s no denying the continuing prominence of assessment in formal learning environments—in the United States and around the world. For both pedagogical and political reasons, assessment is here to stay. Even the Microsoft/Cisco/Intel Partners in Education Transformation Project—arguably one of the harshest critics of contemporary assessment models—wants to “transform” these models rather than eliminate them entirely: “Assessment reform is key to the transformation of the educational system as a whole” (Assessment Call to Action 2009, p. 5).

Transforming the educational system may be a worthy goal, but it overlooks the vast amount of learning that occurs outside that system. Informal information-rich environments like public libraries, museums, movie theaters, and the Internet/Web provide tremendous opportunities for learning—and for failing to learn. The patron who cannot navigate the library’s collection, the visitor who fails to recognize the context of a particular museum display, and the movie-goer who hasn’t mastered at least a few film conventions (Salomon 1979) all truncate their opportunities for learning within those venues. Perhaps most importantly, the Web user who doesn’t recognize a world beyond Google and/or Wikipedia misses a virtual world of opportunity to locate, evaluate, and use high-quality information. The effects can range from the trivial to the critical: the unskilled movie viewer who doesn’t understand Alfred Hitchcock’s “in joke” of appearing in almost all his movies might miss a moment of pleasure, but the unskilled Internet/Web user who doesn’t understand the importance of evaluating information for authority might make a fatal choice about health care.

And just as learning continues well beyond the educational system, so should the assessment of that learning—especially when assessment is defined as an integral part of the learning process. In fact, the need for self-assessment is even greater for “information learners” in informal environments precisely because such environments do not directly support learning with curricular categories, instructional materials, teachers, and school librarians. Learners themselves bear the responsibility for judging and augmenting their own abilities to create knowledge. They take no tests and answer to no authorities. They are the designers and assessors of their own abilities to use information as a tool for learning.

Of course, the kinds of assessment that are useful in informal information-rich environments are markedly different from the standard assessments that drive much of formal education: when both the content and the “audience” for these assessments shift from the purview of others to the realm of personal responsibility, tools for assessment must be seen in a very different light. Here, too, I-LEARN offers an opportunity to assess—and improve—one’s ability to use information as a tool for learning. Simply invoking the six stages as a mnemonic can remind informal learners of the kinds of concepts and skills that are important in learning with information. Calling into play at least some of the specific elements within these six stages can also enhance such users’ success as learners. Fig. 6.4 provides an example of how the informal learning-and-assessment process might work at an exhibit in a museum, while Fig. 6.5 suggests how it might work during a Web search. The last entry for each stage—assessment—illustrates how the rubric presented in Fig. 6.2 can be applied to both examples.
Fig. 6.4

I-LEARN and informal learning: A trip to a museum

Fig. 6.5

I-LEARN and informal learning: Learning with the World Wide Web

Of course, the “museum” description above is artificial—no World War II buff or neophyte is likely to proceed exactly according to the steps presented. But I-LEARN provides a basic structure for getting the most learning from an encounter in the information-rich environment provided by museums, and adopting it as a tool can help users maximize their experience. Using it as a checklist to guide a trip to an exhibit can alert learners to ways to enhance their learning, and even using it as an after-the-experience reminder can help them consolidate that learning.

The “World Wide Web” example is also somewhat artificial in that it describes the process of learning with information in a linear, dispassionate way. When a user’s “activation” is the result of a troubling event like the diagnosis of a major disease, his or her pursuit of information about the disease is likely to be more random than systematic. In instances like this, I-LEARN might also be more useful after the fact, as a checklist to assure the information seeker that his or her information gathering has covered all the appropriate steps and led to warranted conclusions.
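A brief sketch suggests how such an after-the-fact checklist might look. The prompts paraphrase the six stages in question form; they are illustrative, not the model's own wording.

```python
# A hypothetical self-assessment checklist for informal learners, with
# prompts that paraphrase (not quote) the six I-LEARN stages.
CHECKLIST = {
    "Identify": "Did I articulate what I actually wanted to find out?",
    "Locate":   "Did I look beyond the first sources I happened upon?",
    "Evaluate": "Did I weigh authority, relevance, and timeliness?",
    "Apply":    "Did I use the information to reach my own conclusion?",
    "Reflect":  "Did I question both my conclusion and how I reached it?",
    "kNow":     "Could I draw on what I learned in a future situation?",
}

def stages_to_revisit(responses: dict) -> list:
    """Return the stages an informal learner answered 'no' to."""
    return [stage for stage in CHECKLIST if not responses.get(stage, False)]

# Example: a health-information search that skipped evaluation and reflection.
responses = {"Identify": True, "Locate": True, "Evaluate": False,
             "Apply": True, "Reflect": False, "kNow": False}
print(stages_to_revisit(responses))  # ['Evaluate', 'Reflect', 'kNow']
```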

Both Fig. 6.4 and Fig. 6.5 offer suggestions for applying I-LEARN to information-based learning in informal situations. Not every situation, obviously, lends itself fully to this approach: a cell-phone tour of Philadelphia’s Elfreth’s Alley, for example, might be more rewarding if a learner tried to understand the variety of concepts presented—architecture, waves of immigration, varieties of occupations, etc.—rather than focusing on only one problem or question. Even here, however, I-LEARN might prove useful for helping a visitor make conscious use of the information at hand to develop meaning from the experience—sorting out the ideas presented in order to generate a personal interpretation to add to his or her store of knowledge and assessing the degree to which his or her learning fulfilled an interest or need. Above all, having I-LEARN available as a tool in such experiences will reinforce a habit of mind that sees the world itself as an information-rich environment and even everyday experiences as valuable opportunities for learning.

6.6 Conclusion

Over several preceding decades, the belief that the essence of learning could be captured in both broad and narrow outcome statements became rooted in the educational establishment. The outcomes-instruction-assessment continuum envisioned in the 1950s evolved over the years into an approach that often granted assessment independence from its pedagogical roots and elevated it to a high-stakes arbiter both of what students must learn in their schools and of what additional formal learning they could pursue after graduation.

Until recently, discussions of the role of assessment in learning with information have been largely peripheral—as parents, educators, students, and governments have focused on gauging students’ mastery of traditional subject-matter skills. While that focus is sure to continue, recent developments suggest that society is beginning to understand the importance of specifying the knowledge and skills involved in using information as a tool for learning and, subsequently, of designing assessments to address those outcomes. Against this backdrop, the question arises of how to assess learners’ abilities to use information for learning across a variety of information-rich environments, both formal and informal, and how to design those assessments as pedagogical tools as well as tools for determining mastery of the processes and outcomes of learning.

Thinking about the I-LEARN model as a framework for designing assessments yields a variety of general ideas as well as some specific tools that could serve both learning and assessment. I-LEARN’s grounding in contemporary learning theory and in Anderson and Krathwohl’s (2001) recent update of Bloom et al.’s Taxonomy of Educational Objectives (1956) bridges the old and the new to suggest both instructional approaches and ways to design formative and summative evaluations to assess both the process and the outcomes of learning.

Of particular interest in formal environments is I-LEARN’s potential as a pedagogical tool that links learning and assessment (see Phillips and Wong 2010). Its iterative, process-based nature provides a mechanism both for guiding students through the process of learning with information and for ascertaining their understanding of the entire process as well as of its various components. While each of its stages is discrete enough to allow assessment, its special value at this point in formal education might well be the support it provides for helping teachers and students move formatively from stage to stage. Using the model to help learners build upon, correct, and expand their understanding—à la Socrates in Athens—holds promise for helping students truly understand how to use information as a tool for learning.

Unlike assessment in formal learning environments, assessment in informal learning environments is always formative, never summative. Its purpose is solely to evaluate one’s own learning and to improve it as much as possible. There is no Socrates sitting in the stoa with informal learners, guiding their progress, but his shade hovers over such learners as they continually question their own understanding and make efforts to improve their knowledge. Using I-LEARN as a self-directed learning tool and for self-assessment can help learners gain the most from their experiences in all the information-rich environments that present themselves as opportunities for learning.


  1. American Association of School Librarians and Association for Educational Communications and Technology (1998). Information power: Building partnerships for learning. Chicago: ALA Editions.
  2. Anderson, L. W., & Krathwohl, D. R. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's Taxonomy of Educational Objectives. New York: Addison Wesley Longman.
  3. Association of College and Research Libraries (2008). Information literacy standards for anthropology and sociology students. Available at http://www.ala.org/ala/mgrps/divs/acrl/standards/infolit.cfm
  4. Association of College and Research Libraries (2006). Information literacy standards for science and technology. Available at http://www.ala.org/ala/mgrps/divs/acrl/standards/infolit.cfm
  5. Association of College and Research Libraries (2001). Objectives for information literacy instruction: A model statement for academic librarians. Available at http://www.ala.org/ala/mgrps/divs/acrl/standards/infolit.cfm
  6. Association of College and Research Libraries (2007). Research competency guidelines for literatures in English. Available at http://www.ala.org/ala/mgrps/divs/acrl/standards/infolit.cfm
  7. Bloom, B. S. (Ed.) (1956). Taxonomy of educational objectives: Cognitive domain. New York: Longman.
  8. Brasington, D., & Haurin, D. R. (2006). Educational outcomes and house values: A test of the value added approach. Journal of Regional Science, 56, 245–268.
  9. Brasington, D., & Haurin, D. R. (2009). Parents, peers, or school inputs: Which components of school outcomes are capitalized into house value? Regional Science and Urban Economics, 39(5), 523–529.
  10. Briggs, L. J. (1977). Instructional design: Principles and applications. Englewood Cliffs, NJ: Educational Technology Publications.
  11. Educational Testing Service (2003). Succeeding in the 21st century: What higher education must do to address the gap in information and communication technology proficiencies. Available at http://www.ets.org/
  12. Eisenberg, M. B., Lowe, C. A., & Spitzer, K. L. (2004). Information literacy: Essential skills for the information age. Westport, CT: Libraries Unlimited.
  13. Harada, V. H., & Yoshina, J. M. (2005). Assessing learning: Librarians and teachers as partners. Westport, CT: Libraries Unlimited.
  14. Haurin, D. R., & Brasington, D. (1996). School quality and real house prices: Intra- and interjurisdictional effects. Journal of Housing Economics, 5(4), 351–368.
  15. International Society for Technology in Education (1998, 2007). National education technology standards for students. Available at http://www.iste.org
  16. International Society for Technology in Education (2000, 2008). National education technology standards for teachers. Available at http://www.iste.org
  17. International Society for Technology in Education (2002, 2009). National education technology standards for administrators. Available at http://www.iste.org
  18. Katz, I. R. (2005). Beyond technical competence: Literacy in information and communication technology. White paper for the Educational Testing Service. Available at http://www.ets.org/
  19. Katz, I. R. (2007). Testing information literacy in digital environments: ETS's iSkills assessment. White paper for the Educational Testing Service. Available at http://www.ets.org/
  20. Kendall, J. S., & Marzano, R. J. (2000). Content knowledge: A compendium of standards and benchmarks for K-12 education (2nd ed.). Aurora, CO, and Alexandria, VA: Mid-continent Regional Education Laboratory and Association for Supervision and Curriculum Development.
  21. Kuhlthau, C. C. (1987). Information skills for an information society: A review of research. Syracuse, NY: ERIC Clearinghouse on Information Resources.
  22. Loertscher, D. V., & Woolls, B. (2002). Information literacy: A review of the research. San Jose, CA: Hi Willow.
  23. Mager, R. F. (1962). Preparing objectives for programmed instruction. Belmont, CA: Fearon.
  24. Morrison, G. R., Ross, S. M., & Kemp, J. E. (2004). Designing effective instruction (4th ed.). New York: Wiley.
  25. Neuman, D. (2000). Information Power and assessment: The other side of the standards coin. In R. M. Branch & M. A. Fitzgerald (Eds.), Educational media and technology yearbook 2000 (pp. 110–119). Englewood, CO: Libraries Unlimited.
  26. Partnership for 21st Century Skills (2003). Learning for the 21st century: A report and MILE guide for 21st century skills. Available at www.21stcenturyskills.org
  27. Partnership for 21st Century Skills (2004). Framework for 21st century learning. Available at www.21stcenturyskills.org
  28. Partners in Education Transformation Project (2009). Assessment call to action. Available at http://www.microsoft.com/education/programs/transformation.mspx
  29. Phillips, V., & Wong, C. (2010). Tying together the common core of standards, instruction, and assessment. Phi Delta Kappan, 91(5), 37–42.
  30. Salomon, G. (1979). Interaction of media, cognition, and learning: An exploration of how symbolic forms cultivate mental skills and affect knowledge acquisition. San Francisco: Jossey-Bass.
  31. Strickland, K., & Strickland, J. (2000). Making assessment elementary. Portsmouth, NH: Heinemann.
  32. Tyler, L. (2005). ICT literacy: Equipping students to succeed in an information-rich, technology-based society. White paper for the Educational Testing Service. Available at http://www.ets.org/
  33. Wiggins, G., & McTighe, J. (1998, 2005). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.
