Detecting computer activities using eye-movement features

  • Original Research
  • Published in the Journal of Ambient Intelligence and Humanized Computing

Abstract

As we move toward an age of smart human–computer interaction, human activity recognition has become one of the most active areas for advancing ambient intelligence techniques. To this end, this study aimed to select critical eye-movement features for building artificial intelligence models that recognize three common user activities in front of a computer using an eye tracker. One hundred fifty students participated in this study, each performing three everyday computer activities: reading English journal articles, typing English sentences, and watching an English trailer video. While they performed these tasks, their eye movements were recorded with a desktop eye tracker (GP3 HD, Gazepoint™, Canada). The collected data were then processed into 19 eye-movement features. Before convolutional neural network (CNN) models were built to recognize the three computer activities, three feature selection methods, namely analysis of variance (ANOVA), extra tree classification (ETC), and mutual information (MI), were used to screen critical features. For each feature selection method, the top five and top 11 selected features were then used to build six types of CNN models. For comparison, a seventh type of CNN model was developed using all 19 features. A comparison of the seven model types showed that the models using the top 11 features screened by ANOVA were superior to the others, with an average accuracy of 93.15%. This study demonstrates the application of feature selection methods and offers an alternative means of recognizing user activities in front of the computer.
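The feature-screening step described in the abstract can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the synthetic `X` and `y` below stand in for the real 150-participant, 19-feature eye-movement data set, and the `top_k` helper is a hypothetical convenience function. The three scoring methods (ANOVA F-test, extra-trees importances, mutual information) and the top-11 cut-off follow the abstract.

```python
# Sketch of screening 19 eye-movement features with the three methods
# named in the abstract; synthetic data replaces the real recordings.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 19))      # 150 participants x 19 features
y = rng.integers(0, 3, size=150)    # 0=reading, 1=typing, 2=watching

def top_k(scores, k):
    """Return the indices of the k highest-scoring features."""
    return np.argsort(scores)[::-1][:k]

# ANOVA F-test (the method the paper found best with k = 11)
f_scores, _ = f_classif(X, y)
anova_top11 = top_k(f_scores, 11)

# Mutual information between each feature and the activity label
mi_scores = mutual_info_classif(X, y, random_state=0)
mi_top11 = top_k(mi_scores, 11)

# Extra-trees (extremely randomized trees) feature importances
etc = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
etc_top11 = top_k(etc.feature_importances_, 11)

# The screened subset would then feed the downstream CNN models
X_selected = X[:, anova_top11]
print(X_selected.shape)             # (150, 11)
```

Each method ranks all 19 features; the study then trained separate CNN models on the top-5 and top-11 subsets from each ranking, plus one on the full set, which is how the seven model types arise.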


Fig. 1
Fig. 2
Fig. 3



Acknowledgements

We would like to acknowledge the grant support from the Taiwan Ministry of Science and Technology (MOST107-2221-E-155-033-MY3) for funding the paper submission.

Author information

Corresponding author

Correspondence to Ray F. Lin.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Destyanto, T.Y.R., Lin, R.F. Detecting computer activities using eye-movement features. J Ambient Intell Human Comput 14, 14441–14451 (2023). https://doi.org/10.1007/s12652-020-02683-8

