Identifying and Pruning Features for Classifying Translated and Post-edited Gaze Durations

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10089)

Abstract

The present paper reports on a series of experiments carried out to classify source and target gaze fixation durations on an eye-tracking dataset, namely the Translation Process Research (TPR) database. Different features were extracted separately from the source and target parts of the TPR dataset, and separate models were built from these features within a machine learning framework. The models were trained using a Support Vector Machine (SVM), and the best cross-validation accuracies of 49.01% and 59.78% were obtained for source and target gaze fixation durations, respectively. The experiments were repeated on the post-edited dataset using the same experimental setup, and the highest accuracy of 71.70% was obtained. Finally, Information Gain based pruning was performed in order to select the features most useful for classifying the gaze durations.
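The sketch below illustrates the kind of pipeline the abstract describes: train an SVM on extracted features, score it with cross-validation, and rank features by Information Gain. It is not the authors' code; scikit-learn stands in for the toolkit used in the paper, and the feature matrix, class bins, and fold count are illustrative assumptions only.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.feature_selection import mutual_info_classif

    # Hypothetical data: one row per source/target token, with columns for
    # features such as word length, frequency, and sentence position; y holds
    # gaze fixation durations binned into classes (e.g. short/medium/long).
    rng = np.random.default_rng(0)
    X = rng.random((500, 10))          # placeholder feature matrix
    y = rng.integers(0, 3, size=500)   # placeholder duration classes

    # Train an SVM classifier and report mean accuracy under k-fold
    # cross-validation, mirroring the accuracies quoted in the abstract.
    clf = SVC(kernel="rbf")
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"mean cross-validation accuracy: {scores.mean():.4f}")

    # Information Gain based pruning: estimate the mutual information between
    # each feature and the class labels (mutual information coincides with
    # information gain for a discrete target) and keep the top-k features.
    gain = mutual_info_classif(X, y, random_state=0)
    top_k = np.argsort(gain)[::-1][:5]
    print("top features by information gain:", top_k)
    X_pruned = X[:, top_k]             # retrain the SVM on the pruned set

With real data, X would be the extracted source- or target-side feature vectors and y the binned fixation durations; the pruning step would then be followed by retraining on X_pruned to check whether the reduced feature set preserves accuracy.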

Keywords

Eye tracking · Gaze fixation duration · Translation · Post-editing · Information gain

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Tanik Saikh (1)
  • Dipankar Das (1)
  • Sivaji Bandyopadhyay (1)

  1. Department of Computer Science and Engineering, Jadavpur University, Kolkata, India