Abstract
Standardized reading comprehension tests have been criticized for their inability to assess what a reader has actually gleaned from a passage during the reading process (Tuinman, 1973). Tuinman reported scores as high as 65 percent correct on multiple-choice comprehension tests answered by subjects who had never seen the passages on which the questions were based. His paper, among others, has led reading researchers to focus on the distinction between comprehension and information gain (IG). In general, the concept of IG involves comparing the reader's state of knowledge before and after reading the test passage. One method of assessing IG, suggested by Coleman and Miller (1967), is to obtain percent-correct cloze scores before and after the subjects have had an opportunity to read the test passage in its original, unmutilated form; the difference between the posttest and pretest scores is taken as the IG score.
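The Coleman and Miller difference score described above amounts to a simple subtraction of percent-correct cloze scores. A minimal sketch follows; the function names and the sample counts are hypothetical illustrations, not data from the study:

```python
# Hypothetical illustration of the Coleman-Miller information-gain (IG) score:
# IG = posttest percent-correct cloze score - pretest percent-correct cloze score.

def percent_correct(correct: int, total: int) -> float:
    """Percent-correct cloze score for one administration."""
    return 100.0 * correct / total

def information_gain(pre_correct: int, post_correct: int, n_blanks: int) -> float:
    """IG score: posttest minus pretest percent correct on the same cloze blanks."""
    return percent_correct(post_correct, n_blanks) - percent_correct(pre_correct, n_blanks)

# Example (made-up numbers): 50 cloze blanks; a reader fills 18 correctly
# before reading the intact passage and 31 correctly afterward.
print(information_gain(18, 31, 50))  # → 26.0
```

An item-level variant of the same idea, comparing pre/post proportions correct per blank, underlies the sensitivity indices the chapter compares.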
References
Bayes, T. 1763. Essay towards solving a problem in the doctrine of chances. Philosophical Transactions 53, 370–418. (Reprinted in Biometrika, 1958, 45, 293–315.)
Brennan, R. L. 1972. A generalized upper-lower item-discrimination index. Educational and Psychological Measurement 32, 289–303.
Brennan, R. L. and L. M. Stolurow. 1971. An empirical decision process for formative evaluation. Research Memorandum No. 4. Cambridge, MA: Harvard University CAI Laboratory.
Coleman, E. B. and G. R. Miller. 1967. A measure of information gained during prose learning. Reading Research Quarterly 3, 369–386.
Cox, R. C. and J. Vargas. 1966. A comparison of item selection techniques for norm-referenced and criterion-referenced tests. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago.
Haladyna, T. and G. Roid. 1981. The role of instructional sensitivity in the empirical review of criterion-referenced items. Journal of Educational Measurement 18, 39–52.
Helmstadter, G. C. 1974. A comparison of Bayesian and traditional indexes of test item effectiveness. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago.
Iversen, G. R. 1984. Bayesian statistical inference. (Sage University Paper No. 43, Series: Quantitative Applications in the Social Sciences). Beverly Hills, CA: Sage Publications, Inc.
Popham, W. J. 1971. Indices of adequacy for criterion-referenced tests. In Popham, W. J., editor, Criterion-referenced measurement. Englewood Cliffs, NJ: Educational Technology Publications.
Roid, G. and T. Haladyna. 1982. A technology for test-item writing. New York: Academic Press.
Tuinman, J. J. 1973. Determining the passage dependency of comprehension questions in 5 major tests. Reading Research Quarterly 9, 206–223.
© 1990 Springer Science+Business Media New York
Cite this chapter
Perkins, K., Hunsaker, W.N. (1990). A Comparison of Bayesian and Traditional Indices for Measuring Information Gain Sensitivity in a Cloze Test. In: Arena, L.A. (eds) Language Proficiency. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-0870-4_17
Print ISBN: 978-1-4899-0872-8
Online ISBN: 978-1-4899-0870-4