
Collecting Visual Effect Linked Data Using GWAP

  • Conference paper
E-Learning and Games (Edutainment 2018)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11462)


Abstract

We developed a game with a purpose (GWAP) that collects structured data corresponding to adjectives in order to build a visual effect dictionary. In this system, new semantic links can be acquired. Under the guise of a fighting game, the system encourages users to vote on the commonsense knowledge associated with an object, because our previous research indicated that the rules for showing an appropriate visual effect for an adjective are related to commonsense knowledge of the target object. The system displays visual effects on the target object, and the data structure is updated based on users' votes. This structured data underlies a new type of communication support system that continuously improves the visual effects associated with adjectives and objects. In this paper, we discuss the structure of the visual effect dictionary through an experiment. Findings show that the GWAP effectively strengthens the links between commonsense knowledge and objects while creating new links via deduction.
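
As a rough illustration of the mechanism the abstract describes, the sketch below models the dictionary as weighted links between objects and commonsense concepts, updated by player votes and extended by a simple deduction step. This is a minimal sketch in Python under assumed details: the class and method names (VisualEffectDictionary, vote, deduce), the net-vote weighting scheme, and the example data are all hypothetical, not the authors' implementation.

    # Hypothetical sketch of a visual effect dictionary built from GWAP votes;
    # this is not the schema from the paper, only an illustration of the idea.
    from collections import defaultdict

    class VisualEffectDictionary:
        def __init__(self):
            # weight[(object, concept)] = net approval accumulated from votes
            self.weight = defaultdict(int)

        def vote(self, obj, concept, approve):
            # Record one player's vote on whether `concept` (an adjective or
            # a piece of commonsense knowledge) fits `obj`.
            self.weight[(obj, concept)] += 1 if approve else -1

        def links(self, obj, threshold=1):
            # Return the concepts whose net votes reach `threshold` for `obj`.
            return [c for (o, c), w in self.weight.items()
                    if o == obj and w >= threshold]

        def deduce(self, implies):
            # Create new links by deduction: if `obj` is positively linked to
            # concept A and `implies` maps A to concept B, seed an obj-B link.
            for (obj, concept), w in list(self.weight.items()):
                if w <= 0:
                    continue
                for implied in implies.get(concept, []):
                    if (obj, implied) not in self.weight:
                        self.weight[(obj, implied)] = 1  # seed for later votes

    # Example: players vote that "sun" is "hot"; deduction then links "sun"
    # to a heat-haze visual effect because "hot" is mapped to it.
    d = VisualEffectDictionary()
    d.vote("sun", "hot", approve=True)
    d.deduce({"hot": ["heat-haze effect"]})
    print(d.links("sun"))  # ['hot', 'heat-haze effect']

The example mirrors the two behaviors the abstract claims for the GWAP: votes strengthen existing links between commonsense knowledge and objects, and deduction creates new links that can then be confirmed or rejected by further votes.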



Author information


Correspondence to Kaoru Sumi.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Hirai, S., Sumi, K. (2019). Collecting Visual Effect Linked Data Using GWAP. In: El Rhalibi, A., Pan, Z., Jin, H., Ding, D., Navarro-Newball, A., Wang, Y. (eds.) E-Learning and Games. Edutainment 2018. Lecture Notes in Computer Science, vol. 11462. Springer, Cham. https://doi.org/10.1007/978-3-030-23712-7_39


  • DOI: https://doi.org/10.1007/978-3-030-23712-7_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23711-0

  • Online ISBN: 978-3-030-23712-7

  • eBook Packages: Computer Science, Computer Science (R0)
