3D Saliency from Eye Tracking with Tomography

  • Bo Ma
  • Eakta Jain
  • Alireza Entezari
Conference paper
Part of the Mathematics and Visualization book series (MATHVISUAL)


This paper presents a method to build a saliency map in a volumetric dataset using 3D eye tracking. Our approach acquires saliency information from multiple views of a 3D dataset with an eye tracker and constructs the 3D saliency volume from the gathered 2D saliency information using a tomographic reconstruction algorithm. Our experiments on several datasets demonstrate the effectiveness of our approach in identifying salient 3D features that attract users' attention. The resulting 3D saliency volume provides importance information and can be used in applications such as illustrative visualization.
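The reconstruction step the abstract describes — gathering per-view 2D saliency maps and inverting them tomographically into a volume — is classically done with filtered back projection. The sketch below shows that step for a single 2D slice in NumPy (stacking slices extends it to a volume); it is a minimal illustration, not the authors' implementation, and the function names, nearest-neighbour sampling, and toy "salient blob" geometry are assumptions made for the example.

```python
import numpy as np

def project(image, angles):
    """Forward-project a square 2D saliency map: line sums at each view angle
    (a stand-in for the per-view 2D saliency gathered by eye tracking)."""
    n = image.shape[0]
    coords = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(coords, coords)
    sino = np.empty((len(angles), n))
    for i, t in enumerate(angles):
        c, s = np.cos(t), np.sin(t)
        # nearest-neighbour rotation of the sampling grid by -t
        xi = np.clip(np.round(c * X + s * Y + n / 2.0).astype(int), 0, n - 1)
        yi = np.clip(np.round(-s * X + c * Y + n / 2.0).astype(int), 0, n - 1)
        sino[i] = image[yi, xi].sum(axis=0)  # sum along parallel rays
    return sino

def filtered_back_projection(sino, angles):
    """Reconstruct the 2D map from its projections with a ramp filter."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))           # ramp filter in the Fourier domain
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    coords = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(coords, coords)
    recon = np.zeros((n, n))
    for i, t in enumerate(angles):
        # detector coordinate of each pixel for this view (matches project())
        u = np.cos(t) * X - np.sin(t) * Y
        idx = np.clip(np.round(u + n / 2.0).astype(int), 0, n - 1)
        recon += filtered[i][idx]              # smear the filtered view back
    recon[X**2 + Y**2 > (n / 2.0) ** 2] = 0.0  # keep only the valid circular region
    return recon * np.pi / (2 * len(angles))

# Toy example: one bright "salient" blob, 60 views over 180 degrees.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
saliency = np.exp(-((xx - 24) ** 2 + (yy - 40) ** 2) / (2 * 3.0 ** 2))
angles = np.linspace(0, np.pi, 60, endpoint=False)
recon = filtered_back_projection(project(saliency, angles), angles)
peak = np.unravel_index(np.argmax(recon), recon.shape)  # expected near (40, 24)
```

With enough views, the reconstructed slice recovers the blob's location; with the sparse, irregular view sampling of a real eye-tracking session, the same back-projection machinery still concentrates mass where gaze repeatedly fell, which is what makes the tomographic formulation attractive here.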


Filtered Back Projection · Saliency Region · Back Projection · Saliency Information · Direct Volume Rendering
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



This work was supported in part by the Office of Naval Research (N00014-16-1-2228) and US National Science Foundation (NSF IIS-1617101). The datasets are courtesy of the volvis community and OsiriX Foundation.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. University of Florida, Gainesville, USA
