
A Syllabus-Fairness Measure for Evaluating Open-Ended Questions

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 264)

Abstract

A modularized syllabus that assigns weightages to the different units of a subject is very useful to both the teaching and the student community. Various criteria, such as Bloom’s taxonomy and learning outcomes, have been used to evaluate the fairness of a question paper, but we have not come across any work that uses unit weightages to compute syllabus fairness. In this paper we therefore address the problem of evaluating the syllabus fairness of the open-ended questions in an examination question paper by analyzing the questions against the syllabus. Text mining techniques are used to extract keywords from the textual content of the syllabus file and of the question paper. A similarity coefficient is used to compute the similarity between question content and syllabus content, and a similarity matrix is built between the question vectors and the syllabus vectors. This similarity matrix serves as a guideline for grouping the questions unit-wise, matching the marks allotted to each unit against the weightages in the syllabus file, and thereby evaluating the syllabus fairness of the question paper. The resulting fairness score can be used by the subject expert, question paper setter, or question paper moderator to revise the questions of the examination paper accordingly.
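To make the pipeline described in the abstract concrete, the sketch below shows one plausible realization, assuming TF-IDF vectors, cosine similarity as the similarity coefficient, and a simple deviation-based fairness score. The unit texts, questions, marks, and the fairness formula are illustrative assumptions, not the paper's exact method.

# Minimal sketch of a syllabus-fairness check, under the assumptions stated above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical syllabus file: each unit's keyword text and its weightage (marks).
syllabus_units = {
    "Unit 1: Text preprocessing": ("tokenization stop-word removal stemming", 30),
    "Unit 2: Vector space model": ("tf-idf term weighting document vectors", 40),
    "Unit 3: Similarity measures": ("cosine similarity jaccard coefficient", 30),
}

# Hypothetical open-ended questions with the marks assigned to each.
questions = [
    ("Explain stemming and stop-word removal with examples.", 20),
    ("Derive the tf-idf weight of a term and discuss its use.", 40),
    ("Compare cosine similarity with the Jaccard coefficient.", 40),
]

unit_names = list(syllabus_units)
unit_texts = [text for text, _ in syllabus_units.values()]
question_texts = [q for q, _ in questions]

# Build TF-IDF vectors over a shared vocabulary and compute the
# question-by-unit similarity matrix the abstract refers to.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(unit_texts + question_texts)
unit_vecs, question_vecs = matrix[: len(unit_texts)], matrix[len(unit_texts):]
similarity = cosine_similarity(question_vecs, unit_vecs)

# Group questions unit-wise by their most similar unit and total the marks.
allocated = dict.fromkeys(unit_names, 0)
for (q_text, marks), sims in zip(questions, similarity):
    allocated[unit_names[sims.argmax()]] += marks

# Compare allocated marks against syllabus weightages; here the fairness score
# is 1 minus the normalized absolute deviation (an assumed formula).
total = sum(marks for _, marks in questions)
deviation = sum(
    abs(allocated[name] - weight) for name, (_, weight) in syllabus_units.items()
)
fairness = 1 - deviation / (2 * total)
print(allocated, round(fairness, 2))

On this toy input, Unit 1 receives 20 marks against a weightage of 30 and Unit 3 receives 40 against 30, giving a fairness score of 0.9 under the assumed formula; a lower score would flag the paper for revision.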





Author information


Corresponding author

Correspondence to Dimple V. Paul.



Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Paul, D.V., Pawar, J.D. (2014). A Syllabus-Fairness Measure for Evaluating Open-Ended Questions. In: Thampi, S., Gelbukh, A., Mukhopadhyay, J. (eds) Advances in Signal Processing and Intelligent Recognition Systems. Advances in Intelligent Systems and Computing, vol 264. Springer, Cham. https://doi.org/10.1007/978-3-319-04960-1_6


  • DOI: https://doi.org/10.1007/978-3-319-04960-1_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-04959-5

  • Online ISBN: 978-3-319-04960-1

  • eBook Packages: Engineering (R0)
