Abstract
We developed a game with a purpose (GWAP) that collects structured data linking adjectives to visual effects in order to build a visual-effect dictionary, through which new semantic links can be acquired. Under the guise of a fighting game, the system encourages users to vote on the commonsense knowledge associated with an object, because our previous research indicated that the rules for showing an appropriate visual effect for a given adjective depend on commonsense knowledge about the target object. The system displays visual effects on the target object and updates the data structure based on users' votes. This structured data underlies a new type of communication support system that continuously improves the visual effects associated with adjective–object pairs. In this paper, we examine the structure of the visual-effect dictionary through an experiment. Findings show that the GWAP effectively strengthens the links between commonsense knowledge and objects while creating new linkages via deduction.
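The abstract describes a dictionary in which adjective–object links to visual effects are updated by player votes. The following is a minimal illustrative sketch of such a structure, assuming a simple vote-count weighting; all class and method names here are hypothetical and are not taken from the authors' system.

```python
# Illustrative sketch of a vote-weighted visual-effect dictionary.
# Names (VisualEffectDictionary, vote, best_effect) are assumptions
# for illustration, not the authors' implementation.
from collections import defaultdict


class VisualEffectDictionary:
    def __init__(self):
        # (adjective, object) -> {effect_name: accumulated vote count}
        self.links = defaultdict(lambda: defaultdict(int))

    def vote(self, adjective, obj, effect, delta=1):
        """Record a player's vote that `effect` fits `adjective` + `obj`."""
        self.links[(adjective, obj)][effect] += delta

    def best_effect(self, adjective, obj):
        """Return the most-voted effect for the pair, or None if unseen."""
        votes = self.links.get((adjective, obj))
        if not votes:
            return None
        return max(votes, key=votes.get)


d = VisualEffectDictionary()
d.vote("hot", "sword", "flames")
d.vote("hot", "sword", "flames")
d.vote("hot", "sword", "steam")
print(d.best_effect("hot", "sword"))  # flames
```

In the paper's setting, the votes would come from in-game choices rather than direct API calls, and the stored links could then be generalized across objects sharing the same commonsense knowledge.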
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Hirai, S., Sumi, K. (2019). Collecting Visual Effect Linked Data Using GWAP. In: El Rhalibi, A., Pan, Z., Jin, H., Ding, D., Navarro-Newball, A., Wang, Y. (eds) E-Learning and Games. Edutainment 2018. Lecture Notes in Computer Science(), vol 11462. Springer, Cham. https://doi.org/10.1007/978-3-030-23712-7_39
Print ISBN: 978-3-030-23711-0
Online ISBN: 978-3-030-23712-7