
Bayesian Networks

  • Russell G. Almond
  • Juan-Diego Zapata-Rivera
Chapter
Part of the Methodology of Educational Measurement and Assessment book series (MEMA)

Abstract

Bayesian networks (or Bayes nets) are a notation for expressing the joint probability distribution over a number of variables. Variables in a Bayesian network can be continuous or discrete (Lauritzen, 1996); however, when all variables are discrete, every calculation can be carried out as a series of sums and products. As such, Bayes nets provide a notation for expressing a wide variety of cognitive diagnostic models, including ones described in other chapters of this book. Several commercial software packages support Bayesian networks, including HUGIN (Andersen, Olesen, Jensen, & Jensen, 1989), Netica (Norsys, 2004), Genie (BayesFusion, 2017), and BayesiaLab (Bayesia, 2017), as well as a number of free software packages.
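
To make the "series of sums and products" remark concrete, the sketch below works through Bayes' theorem for a toy two-node network: a discrete proficiency variable theta with a prior distribution and an observable item response X with a conditional probability table. The joint distribution is the element-wise product of the prior and the CPT column, the marginal is a sum over the states of theta, and the posterior is the renormalized product. The network structure, state labels, and probability values are illustrative assumptions, not taken from the chapter, and this is a minimal sketch rather than what any of the packages above does internally.

    # Minimal sketch (illustrative values, not from the chapter):
    # a two-node discrete network, proficiency theta -> item response X.
    import numpy as np

    theta_states = ["Low", "Medium", "High"]
    p_theta = np.array([0.3, 0.4, 0.3])          # prior P(theta), assumed values
    p_x_given_theta = np.array([0.2, 0.5, 0.8])  # CPT P(X = correct | theta), assumed values

    # Joint P(theta, X = correct): a product of prior and CPT entries.
    joint = p_theta * p_x_given_theta

    # Marginal P(X = correct): a sum over the states of theta.
    p_correct = joint.sum()

    # Posterior P(theta | X = correct): renormalize the products.
    posterior = joint / p_correct

    for state, prob in zip(theta_states, posterior):
        print(f"P(theta = {state} | X = correct) = {prob:.3f}")

For larger networks the same sum-product pattern is organized by junction-tree and related propagation algorithms, which is essentially what packages such as HUGIN and Netica implement; the toy calculation above only illustrates why fully discrete networks never require integration.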

References

  1. Almond, R. G. (2010). I can name that Bayesian network in two matrixes. International Journal of Approximate Reasoning, 51, 167–178. https://doi.org/10.1016/j.ijar.2009.04.005
  2. Almond, R. G. (2015). An IRT-based parameterization for conditional probability tables. In J. M. Agosta & R. N. Carvalho (Eds.), Bayesian Modelling Application Workshop at the Uncertainty in Artificial Intelligence (UAI) Conference, Amsterdam, The Netherlands. Additional material available at http://pluto.coe.fsu.edu/RNetica/
  3. Almond, R. G. (2017a). CPTtools: R code for constructing Bayesian networks (0–4.3 ed.) [Computer software manual]. Retrieved from http://pluto.coe.fsu.edu/RNetica/CPTtools.html (Open source software package).
  4. Almond, R. G. (2017b). Peanut: An object-oriented framework for parameterized Bayesian networks (0–3.2 ed.) [Computer software manual]. Retrieved from http://pluto.coe.fsu.edu/RNetica/Peanut.html (Open source software package).
  5. Almond, R. G. (2017c). RNetica: Binding the Netica API in R (0–5.1 ed.) [Computer software manual]. Retrieved from http://pluto.coe.fsu.edu/RNetica/RNetica.html (Open source software package).
  6. Almond, R. G. (2017d). Tabular views of Bayesian networks. In J. M. Agosta & T. Singliar (Eds.), Bayesian Modeling Application Workshop at the Uncertainty in Artificial Intelligence (UAI) Conference, Sydney, Australia.
  7. Almond, R. G., DiBello, L. V., Jenkins, F., Mislevy, R. J., Senturk, D., Steinberg, L. S., et al. (2001). Models for conditional probability tables in educational assessment. In T. Jaakkola & T. Richardson (Eds.), Artificial Intelligence and Statistics 2001 (pp. 137–143). San Francisco, CA: Morgan Kaufmann.
  8. Almond, R. G., Kim, Y. J., Shute, V. J., & Ventura, M. (2013). Debugging the evidence chain. In R. G. Almond & O. Mengshoel (Eds.), Proceedings of the 2013 UAI Application Workshops: Big Data Meet Complex Models and Models for Spatial, Temporal and Network Data (UAI2013AW), Aachen, Germany (pp. 1–10). Retrieved from http://ceur-ws.org/Vol-XXX/paper-01.pdf
  9. Almond, R. G., & Mislevy, R. J. (1999). Graphical models and computerized adaptive testing. Applied Psychological Measurement, 23, 223–238.
  10. Almond, R. G., Mislevy, R. J., Steinberg, L. S., Yan, D., & Williamson, D. M. (2015). Bayesian networks in educational assessment. New York, NY: Springer.
  11. Almond, R. G., Mislevy, R. J., Williamson, D. M., & Yan, D. (2007). Bayesian networks in educational assessment. Paper presented at the Annual Meeting of the National Council on Measurement in Education (NCME), Chicago, IL.
  12. Almond, R. G., Mislevy, R. J., & Yan, D. (2007). Using anchor sets to identify scale and location of latent variables. Paper presented at the Annual Meeting of the National Council on Measurement in Education (NCME), Chicago, IL.
  13. Almond, R. G., Shute, V. J., Underwood, J. S., & Zapata-Rivera, J.-D. (2009). Bayesian networks: A teacher’s view. International Journal of Approximate Reasoning, 50, 450–460. https://doi.org/10.1016/j.ijar.2008.04.011
  14. Almond, R. G., Yan, D., Matukhin, A., & Chang, D. (2006). StatShop testing (Research Memorandum No. RM-06-04). Princeton, NJ: Educational Testing Service.
  15. Andersen, S. K., Olesen, K. G., Jensen, F. V., & Jensen, F. (1989). Hugin—A shell for building Bayesian belief universes for expert systems. In IJCAI’89, Detroit, MI. Reprinted in Shafer and Pearl (1990).
  16. Bafumi, J., Gelman, A., Park, D. K., & Kaplan, N. (2005). Practical issues in implementing and understanding Bayesian ideal point estimation. Political Analysis, 13, 171–187. https://doi.org/10.1093/pan/mpi010
  17. BayesFusion, LLC. (2017). Genie [Bayesian network computer software]. Retrieved from http://bayesfusion.com
  18. Bayesia, S. A. S. (2017). BayesiaLab [Bayesian network computer software]. Retrieved from http://www.bayesia.com
  19. Behrens, J. T., Mislevy, R. J., Bauer, M. I., Williamson, D. M., & Levy, R. (2004). Introduction to evidence centered design and lessons learned from its application in a global e-learning program. International Journal of Measurement, 4, 295–301.
  20. Cowell, R. G., Dawid, A. P., Lauritzen, S. L., & Spiegelhalter, D. J. (1999). Probabilistic networks and expert systems. New York, NY: Springer.
  21. Díez, F. J. (1993). Parameter adjustment in Bayes networks. The generalized noisy or-gate. In D. Heckerman & A. Mamdani (Eds.), Uncertainty in Artificial Intelligence: Proceedings of the 9th Conference (pp. 99–105). San Francisco, CA: Morgan Kaufmann.
  22. Frühwirth-Schnatter, S. (2001). Markov chain Monte Carlo estimation of classical and dynamic switching and mixture models. Journal of the American Statistical Association, 96(453), 194–209.
  23. Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis (3rd ed.). Boca Raton, FL: Chapman and Hall. The third edition is revised and expanded and has material that the earlier editions lack.
  24. Graf, E. A. (2003). Designing a proficiency model and associated item models for a mathematics unit on sequences. Paper presented at the Cross Division Math Forum, Princeton, NJ.
  25. Jensen, F. V. (1996). An introduction to Bayesian networks. New York, NY: Springer.
  26. Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25, 258–272.
  27. Kim, J. H., & Pearl, J. (1983). A computational model for causal and diagnostic reasoning in inference systems. In Proceedings of the 8th International Joint Conference on Artificial Intelligence (pp. 190–193). Karlsruhe, Germany: William Kaufmann.
  28. Lauritzen, S. L. (1996). Graphical models. New York, NY: Oxford University Press.
  29. Levy, R., & Mislevy, R. J. (2004). Specifying and refining a measurement model for a simulation-based assessment. International Journal of Measurement, 4, 333–369.
  30. Li, Z., & D’Ambrosio, B. (1994). Efficient inference in Bayes nets as a combinatorial optimization problem. International Journal of Approximate Reasoning, 11, 55–81.
  31. Lunn, D. J., Spiegelhalter, D. J., Thomas, A., & Best, N. G. (2009). The BUGS project: Evolution, critique and future directions (with discussion). Statistics in Medicine, 28, 3049–3082.
  32. Madigan, D., & Almond, R. G. (1995). Test selection strategies for belief networks. In D. Fisher & H. J. Lenz (Eds.), Learning from data: AI and statistics V (pp. 89–98). New York, NY: Springer.
  33. Madigan, D., Mosurski, K., & Almond, R. G. (1997). Graphical explanation in belief networks. Journal of Computational and Graphical Statistics, 6(2), 160–181. Retrieved from http://www.amstat.org/publications/jcgs/index.cfm?fuseaction=madiganjun
  34. Martin, J., & VanLehn, K. (1994). Discrete factor analysis: Learning hidden variables in Bayesian networks (Technical Report No. LRDC-ONR-94-1). Pittsburgh, PA: LRDC, University of Pittsburgh.
  35. Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational assessment (with discussion). Measurement: Interdisciplinary Research and Perspective, 1(1), 3–62.
  36. Muraki, E. (1992). A generalized partial credit model: Application of an EM algorithm. Applied Psychological Measurement, 16(2), 159–176. https://doi.org/10.1177/014662169201600206
  37. Neapolitan, R. E. (2004). Learning Bayesian networks. Englewood Cliffs, NJ: Prentice Hall.
  38. Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
  39. Norsys, Inc. (2004). Netica [Computer software manual]. Retrieved from http://www.norsys.com
  40. Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Mateo, CA: Morgan Kaufmann.
  41. Plummer, M. (2012). JAGS version 3.2.0 user manual (3.2.0 ed.) [Computer software manual]. Retrieved from http://mcmc-jags.sourceforge.net/
  42. R Development Core Team. (2007). R: A language and environment for statistical computing [Computer software manual], Vienna. Retrieved from http://www.R-project.org
  43. Rupp, A. A., Templin, J. L., & Henson, R. A. (2010). Diagnostic measurement: Theory, methods, and applications. New York, NY: Guilford Press.
  44. Samejima, F. (1969). Estimation of latent ability using a response pattern of graded scores. Psychometrika Monograph No. 17, 34(4, Part 2).
  45. Scalise, K., Bernbaum, D. J., Timms, M., Harrell, S. V., Burmester, K., Kennedy, C. A., et al. (2007). Adaptive technology for e-learning: Principles and case studies of an emerging field. Journal of the American Society for Information Science and Technology, 58(14), 2295–2309. https://doi.org/10.1002/asi.20701
  46. Shute, V. J., Graf, E. A., & Hansen, E. G. (2005). Designing adaptive, diagnostic math assessments for individuals with and without visual disabilities. In L. M. Pytlikzillig, R. H. Bruning, & M. Bodvarsson (Eds.), Technology-based education: Bringing researchers and practitioners together (pp. 169–202). Charlotte, NC: Information Age Publishing.
  47. Shute, V. J., Hansen, E. G., & Almond, R. G. (2007). An assessment for learning system called ACED: The impact of feedback and adaptivity on learning (Research Report No. RR-07-26). Princeton, NJ: Educational Testing Service. Retrieved from http://www.ets.org/research/researcher/RR-07-26.html
  48. Shute, V. J., Hansen, E. G., & Almond, R. G. (2008). You can’t fatten a hog by weighing it—Or can you? Evaluating an assessment for learning system called ACED. International Journal of Artificial Intelligence in Education, 18(4), 289–316. Retrieved from http://www.ijaied.org/iaied/ijaied/abstract/Vol_18/Shute08.html
  49. Sinharay, S., & Almond, R. G. (2007). Assessing fit of cognitively diagnostic models—A case study. Educational and Psychological Measurement, 67(2), 239–257.
  50. Sinharay, S., Almond, R. G., & Yan, D. (2004). Assessing fit of models with discrete proficiency variables in educational assessment. Research Report No. RR-04-07. Princeton, NJ: Educational Testing Service. Retrieved from http://www.ets.org/research/researcher/RR-04-07.html
  51. Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & van der Linde, A. (2002). Bayesian measures of model complexity and fit (with discussion). Journal of the Royal Statistical Society (Series B), 64, 583–639.
  52. Spiegelhalter, D. J., & Lauritzen, S. L. (1990). Sequential updating of conditional probabilities on directed graphical structures. Networks, 20, 579–605.
  53. Srinivas, S. (1993). A generalization of the noisy-or model, the generalized noisy or-gate. In D. Heckerman & A. Mamdani (Eds.), Uncertainty in Artificial Intelligence: Proceedings of the 9th Conference (pp. 208–215). San Mateo, CA: Morgan Kaufmann.
  54. Wainer, H., Vevea, J. L., Camacho, F., Reeve, B. B., III, Rosa, K., Nelson, L., et al. (2001). Augmented scores—“borrowing strength” to compute scores based on a small number of items. In D. Thissen & H. Wainer (Eds.), Test scoring (pp. 343–388). Mahwah, NJ: Lawrence Erlbaum Associates.
  55. Williamson, D. M., Bauer, M. I., Steinberg, L. S., Mislevy, R. J., & DeMark, S. F. (2004). Design rationale for a complex performance assessment. International Journal of Testing, 4, 303–332.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Educational Psychology and Learning Systems, Florida State University, Tallahassee, FL, USA
  2. Educational Testing Service, Princeton, NJ, USA
