Identifying and Pruning Features for Classifying Translated and Post-edited Gaze Durations
The present paper reports on experiments carried out to classify source and target gaze fixation durations on an eye-tracking dataset, namely the Translation Process Research (TPR) database. Different features were extracted separately from the source and target parts of the TPR dataset, and separate models were built from these features within a machine learning framework. The models were trained with a Support Vector Machine (SVM), and the best cross-validation accuracies of 49.01% and 59.78% were obtained for source and target gaze fixation durations, respectively. The same experimental setup was also applied to the post-edited dataset, where the highest accuracy of 71.70% was obtained. Finally, Information Gain based pruning was performed in order to select the features most useful for classifying the gaze durations.
Keywords: Eye tracking · Gaze fixation duration · Translation · Post-editing · Information gain
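The sketch below is not the authors' code; it is a minimal illustration of the kind of pipeline the abstract describes: an SVM classifier evaluated with cross-validation, followed by feature pruning. Information Gain is approximated here by scikit-learn's mutual information scorer, and the file name, feature table layout, label column, and the value of k are all hypothetical placeholders.

```python
# Minimal sketch of the described pipeline (assumed data layout and names).
import pandas as pd
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature table: one row per token, one column per extracted
# feature, plus a "duration_class" label column (assumed, not from the paper).
data = pd.read_csv("tpr_features.csv")
X = data.drop(columns=["duration_class"])
y = data["duration_class"]

# Baseline: SVM accuracy with 10-fold cross-validation on all features.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("All features:", cross_val_score(svm, X, y, cv=10).mean())

# Information Gain style pruning, approximated by mutual information:
# keep the k highest-scoring features and re-evaluate the same classifier.
pruned = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=mutual_info_classif, k=10),  # k chosen arbitrarily
    SVC(kernel="rbf"),
)
print("Pruned features:", cross_val_score(pruned, X, y, cv=10).mean())
```

Comparing the two cross-validation scores indicates whether the pruned feature subset retains (or improves) the classification accuracy of the full feature set.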
The research work has received funding from the project “Development of Tree Bank in Indian Languages (TBIL)”, funded by the Department of Electronics and Information Technology (DeitY), Ministry of Communications and Information Technology, Government of India.