An Application of Rule-Induction Based Method in Psychological Measurement for Application in HCI Research

  • Maria Rafalak
  • Piotr Bilski
  • Adam Wierzbicki
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10047)

Abstract

The paper presents a novel approach to creating a computer-adaptive version of a traditional psychological test using a rule induction algorithm. Currently used measures of specific traits (such as intelligence) are based on questionnaires, whose computer versions should be short and non-repeatable. Because established methods for computer adaptive testing have drawbacks, new approaches are needed. The proposed method applies the rule induction algorithm to training data to generate rules, which are then used to determine the importance of individual test items. The items are then partitioned into groups, which allows a curtailed version of the questionnaire to be generated while avoiding repeatability. Verification results show that the proposed method significantly reduces the number of test items (by about one fifth) with relatively little loss of diagnostic accuracy.
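The general idea of the abstract (score each item's importance from the training data, then keep only the most informative items to obtain a curtailed test) can be illustrated with a minimal sketch. The paper uses a rule induction algorithm to derive item importance; the sketch below substitutes a simpler proxy, ranking items by information gain with respect to the diagnostic class, since the paper's exact rule-based scoring is not given here. All function names and the toy data are illustrative, not from the paper.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(responses, labels, item):
    """Entropy reduction obtained by splitting respondents on one item."""
    groups = {}
    for row, y in zip(responses, labels):
        groups.setdefault(row[item], []).append(y)
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def curtail_test(responses, labels, keep_fraction=0.8):
    """Rank items by importance and keep the top fraction of them.

    keep_fraction=0.8 mirrors the roughly one-fifth reduction
    reported in the abstract.
    """
    n_items = len(responses[0])
    ranked = sorted(range(n_items),
                    key=lambda i: information_gain(responses, labels, i),
                    reverse=True)
    kept = max(1, round(keep_fraction * n_items))
    return sorted(ranked[:kept])

# Toy data: 6 respondents, 5 binary items, binary diagnostic class.
# Items 0 and 3 perfectly separate the classes; the rest are weak.
responses = [
    (1, 0, 1, 0, 1),
    (1, 1, 1, 0, 0),
    (0, 0, 0, 1, 1),
    (0, 1, 0, 1, 0),
    (1, 0, 0, 0, 1),
    (0, 1, 1, 1, 0),
]
labels = [1, 1, 0, 0, 1, 0]
print(curtail_test(responses, labels))  # → [0, 1, 2, 3]
```

On this toy data the weakest item (item 4, tied with items 1 and 2 but ranked last) is dropped, shortening the test from five items to four while retaining the two perfectly discriminating items.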

Keywords

Computer adaptive testing (CAT) · Rule induction · Feature reduction · Artificial intelligence (AI)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Polish-Japanese Academy of Information Sciences, Warsaw, Poland
  2. Institute of Radioelectronics and Multimedia Technologies, Warsaw University of Technology, Warsaw, Poland