Abstract
Spot-the-difference is a type of puzzle in which players try to find the differing parts of two perceptually similar but subtly different images. We propose a semi-automatic system that produces a variety of spot-the-difference puzzle images, tagged with their difficulties, from a single input image. First, with little user intervention, we extract the regions to modify from the input using our modified maximal similarity-based region merging algorithm, and then apply a variety of image editing techniques to each region to create a modified image. We evaluate the difficulty of a pair consisting of the input and the modified image by considering the saliency and the perceptual difference of the modified region. We also provide an empirical model that estimates the time required to solve an image pair from its difficulty. We present experimental results and quantitative user study results to demonstrate the effectiveness of the proposed method.
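The abstract's difficulty measure combines the saliency and the perceptual difference of the modified region. The paper's exact formula is not given here, so the sketch below is only an illustrative stand-in under a simple assumption: a change is harder to spot when the edited region is less salient and the pixel-level difference it introduces is smaller. The function name and the inverse-product form are this sketch's inventions, not the authors' model.

```python
import numpy as np

def region_difficulty(original, modified, saliency, mask, eps=1e-8):
    """Illustrative difficulty score for one modified region.

    original, modified : float arrays in [0, 1], same shape (grayscale images).
    saliency           : float array in [0, 1], per-pixel saliency of the scene.
    mask               : bool array marking the modified region.

    Assumption (not the paper's formula): difficulty grows as both the
    region's mean saliency and its mean perceptual difference shrink.
    """
    region = mask.astype(bool)
    # Mean saliency inside the region: salient regions draw attention, so
    # changes there are easier to find.
    sal = float(saliency[region].mean())
    # Mean absolute pixel difference inside the region: larger edits are
    # easier to find.
    diff = float(np.abs(original[region] - modified[region]).mean())
    # Subtle edits in low-saliency regions yield high difficulty scores.
    return 1.0 / (sal * diff + eps)
```

Under this toy score, a faint edit in a low-saliency area ranks as harder than a strong edit in a salient one, which matches the qualitative claim in the abstract; any real system would calibrate such a score against measured solving times, as the paper's empirical model does.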
Acknowledgements
We are grateful to Neowiz Games for their valuable assistance with our experiments. This research was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MEST) (No. 2012R1A1A2007264) and partially supported by the Industry Strategic Technology Development Program (No. 10041784) funded by the Ministry of Knowledge Economy (MKE, Korea).
Cite this article
Jin, JH., Shin, H.J. & Choi, JJ. SPOID: a system to produce spot-the-difference puzzle images with difficulty. Vis Comput 29, 481–489 (2013). https://doi.org/10.1007/s00371-013-0812-6