Analysis of Student Feedback by Ranking the Polarities

  • Thenmozhi Banan
  • Shangamitra Sekar
  • Judith Nita Mohan
  • Prathima Shanthakumar
  • Saravanakumar Kandasamy
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 380)

Abstract

Feedback in colleges and universities is often collected through online polls, OMR sheets, and similar methods, which require Internet access and are machine dependent. Feedback sent through SMS can be more efficient because of its flexibility and ease of use. However, the reliability of these text messages is a concern in terms of accuracy, so we apply text preprocessing techniques, including tokenization, part-of-speech (POS) tagging, sentence splitting, lemmatization, gender identification, true casing, named entity recognition (NER), parsing, coreference resolution, regular-expression NER, and sentiment analysis, to produce more accurate results while giving weight even to seemingly insignificant details in the text. Our experimental analysis, based on sentiment trees and ranking of feedback, produces largely accurate polarities. In this way, we can obtain better feedback results that can be supplied to the faculty to enhance their teaching.
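
The abstract does not give an implementation, and the preprocessing steps it lists map closely onto the annotators of a standard NLP pipeline (for example, Stanford CoreNLP). As a minimal, hypothetical sketch only, not the authors' code, the following Python snippet uses the stanza package as a stand-in pipeline to score sentence-level sentiment for each SMS feedback message and rank the messages by aggregate polarity; the example messages and the choice of library are illustrative assumptions.

    # Hypothetical sketch of polarity scoring and ranking of feedback messages.
    # Assumes the stanza package and its English models are installed
    # (run stanza.download("en") once); this is not the authors' tool chain.
    import stanza

    nlp = stanza.Pipeline(
        lang="en",
        processors="tokenize,mwt,pos,lemma,ner,sentiment",
    )

    # Illustrative feedback texts, standing in for SMS feedback messages.
    feedback_messages = [
        "The lectures are clear and the examples are very helpful.",
        "Too fast, the slides are confusing and assignments are unclear.",
        "Good subject knowledge but the class is sometimes boring.",
    ]

    def polarity(message):
        """Average sentence sentiment: 0 = negative, 1 = neutral, 2 = positive."""
        doc = nlp(message)
        scores = [sentence.sentiment for sentence in doc.sentences]
        return sum(scores) / len(scores) if scores else 1.0

    # Rank feedback from most positive to most negative aggregate polarity.
    scored = sorted(((polarity(m), m) for m in feedback_messages), reverse=True)
    for rank, (score, msg) in enumerate(scored, start=1):
        print(f"{rank}. ({score:.2f}) {msg}")
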

Keywords

Sentiment analysis · Feedback analysis · Polarity calculation · Ranking

Copyright information

© Springer India 2016

Authors and Affiliations

  • Thenmozhi Banan¹
  • Shangamitra Sekar¹
  • Judith Nita Mohan¹
  • Prathima Shanthakumar¹
  • Saravanakumar Kandasamy¹
  1. School of Information Technology and Engineering, VIT University, Vellore, India
