
SPOID: a system to produce spot-the-difference puzzle images with difficulty

  • Original Article
  • Published in The Visual Computer

Abstract

Spot-the-difference is a type of puzzle in which users try to find the parts that differ between two perceptually similar but actually different images. We propose a semi-automatic system that produces various spot-the-difference puzzle images, tagged with their difficulty, from a single input image. First, we extract the regions to modify from the input using our modified maximal similarity-based region merging algorithm with little user intervention, and then apply a variety of image editing techniques to each region to create a modified image. We evaluate the difficulty of a pair consisting of the input and the modified image by considering the saliency and the perceptual difference of the modified region. We also provide an empirical model that estimates the time required to solve a puzzle as a function of its difficulty. We present experimental results and quantitative user-study results to demonstrate the effectiveness of the proposed method.
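The abstract describes a difficulty estimate that combines the saliency and the perceptual difference of the modified region, plus an empirical model mapping difficulty to solve time. The paper's exact metrics are not reproduced here, so the following is a minimal sketch under assumed definitions: `saliency_map` and `diff_map` are per-pixel maps in [0, 1], `region_mask` marks the modified region, and both `difficulty_score` and the linear `estimated_solve_time` model (with its coefficients) are hypothetical illustrations, not the paper's fitted formulas.

```python
import numpy as np

def difficulty_score(saliency_map, diff_map, region_mask, alpha=0.5):
    """Hypothetical difficulty: an edit in a low-saliency region with a
    small perceptual difference is harder to spot, so difficulty varies
    inversely with both cues (a sketch, not the paper's metric)."""
    s = saliency_map[region_mask].mean()   # mean saliency over the region
    d = diff_map[region_mask].mean()       # mean perceptual difference
    return 1.0 / (1e-6 + alpha * s + (1.0 - alpha) * d)

def estimated_solve_time(difficulty, base=5.0, slope=2.0):
    """Illustrative linear stand-in for the paper's empirical time model."""
    return base + slope * difficulty

# Example: the same edit placed in a salient vs. a non-salient region.
mask = np.ones((4, 4), dtype=bool)
diff = np.full((4, 4), 0.5)
easy = difficulty_score(np.full((4, 4), 0.9), diff, mask)
hard = difficulty_score(np.full((4, 4), 0.1), diff, mask)
```

Under this sketch, an edit in a highly salient region yields a lower difficulty score, and hence a shorter estimated solve time, than the same edit hidden in a low-saliency region.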





Acknowledgements

We are grateful to Neowiz Games for their valuable assistance with our experiments. This research was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MEST) (No. 2012R1A1A2007264) and partially supported by the Industry Strategic Technology Development Program (No. 10041784) funded by the Ministry of Knowledge Economy (MKE, Korea).

Author information

Corresponding author

Correspondence to Jung-Ju Choi.

About this article

Cite this article

Jin, J.H., Shin, H.J. & Choi, J.J. SPOID: a system to produce spot-the-difference puzzle images with difficulty. Vis Comput 29, 481–489 (2013). https://doi.org/10.1007/s00371-013-0812-6
