Live Demonstrator of EEG and Eye-Tracking Input for Disambiguation of Image Search Results

  • Jan-Eike Golenia
  • Markus Wenzel
  • Benjamin Blankertz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9359)

Abstract

When searching for images on the web, users are often confronted with irrelevant results due to ambiguous queries. Consider a search term like 'Bill': the results will likely mix images of Bill Clinton, Bill Cosby, and money bills. If the user is interested only in pictures of money bills, most of the results are irrelevant. We built a demo application that exploits EEG and eye-tracking data to disambiguate between two possible interpretations of an ambiguous search term. The demo shows how this sensor input can be integrated into a modern web application.
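To illustrate the kind of pipeline the abstract describes, the following Python sketch fuses per-image EEG relevance scores with gaze dwell times and aggregates them over the two candidate interpretations of the query. It is a minimal sketch under stated assumptions, not the authors' implementation: the feature shapes, function names, and the equal-weight fusion rule are all illustrative, and shrinkage-regularized LDA is used here only because it is a standard single-trial ERP decoder.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_relevance_decoder(epochs, labels):
        # epochs: (n_images, n_features) spatio-temporal ERP features;
        # labels: 1 = image matched the intended interpretation, 0 = it did not.
        # solver='lsqr' with shrinkage='auto' gives Ledoit-Wolf regularized LDA,
        # a common choice for single-trial ERP classification.
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        clf.fit(epochs, labels)
        return clf

    def score_interpretations(clf, epochs, dwell_times, clusters):
        # Fuse per-image EEG relevance scores with normalized gaze dwell times,
        # then average over the interpretation clusters of the search term.
        eeg_scores = clf.decision_function(epochs)
        gaze_scores = dwell_times / dwell_times.max()
        combined = 0.5 * eeg_scores + 0.5 * gaze_scores  # naive equal-weight fusion
        return {c: combined[clusters == c].mean() for c in np.unique(clusters)}

The interpretation cluster with the higher aggregate score would then be taken as the intended meaning and used to filter or re-rank the remaining search results.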

Keywords

Human-computer interaction · Brain-computer interface · EEG · Eye-tracking · Information retrieval · Free viewing · ERP · Multivariate decoding



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Jan-Eike Golenia (1)
  • Markus Wenzel (1)
  • Benjamin Blankertz (1)

  1. Neurotechnology Group, Technische Universität Berlin, Berlin, Germany
