Cross-media analysis and reasoning: advances and directions

  • Yu-xin Peng
  • Wen-wu Zhu
  • Yao Zhao
  • Chang-sheng Xu
  • Qing-ming Huang
  • Han-qing Lu
  • Qing-hua Zheng
  • Tie-jun Huang
  • Wen Gao
Review

Abstract

Cross-media analysis and reasoning is an active research area in computer science and a promising direction for artificial intelligence. However, to the best of our knowledge, no existing work has summarized the state-of-the-art methods for cross-media analysis and reasoning or presented the advances, challenges, and future directions of the field. To address this gap, we provide an overview of the following topics: (1) theory and models for cross-media uniform representation; (2) cross-media correlation understanding and deep mining; (3) cross-media knowledge graph construction and learning methodologies; (4) cross-media knowledge evolution and reasoning; (5) cross-media description and generation; (6) cross-media intelligent engines; and (7) cross-media intelligent applications. In presenting these approaches, advances, and future directions, our goal is not only to draw attention to the state of the art in the field, but also to provide technical insight into the remaining challenges and promising research directions in each of these areas.
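To make topic (1), cross-media uniform representation, concrete, the sketch below shows one classical baseline: canonical correlation analysis (CCA) projects image and text features into a shared space in which cross-media retrieval reduces to nearest-neighbor search. This is an illustrative assumption for exposition, not a method from the paper; the data, feature dimensions, and the choice of scikit-learn's CCA are all hypothetical placeholders.

```python
# Minimal sketch (illustrative only): learn a shared space for paired image
# and text features with classical CCA, then rank texts for an image query.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs = 500
img_feats = rng.normal(size=(n_pairs, 128))   # placeholder image descriptors
txt_feats = rng.normal(size=(n_pairs, 64))    # placeholder text embeddings

# Fit paired projections that maximize correlation between the two views.
cca = CCA(n_components=10)
cca.fit(img_feats, txt_feats)
img_shared, txt_shared = cca.transform(img_feats, txt_feats)

def cosine_sim(a, b):
    # Cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Cross-media retrieval: use one image as the query and rank all texts
# by similarity in the shared space.
query = img_shared[:1]
ranking = np.argsort(-cosine_sim(query, txt_shared)[0])
print("Top-5 retrieved text indices:", ranking[:5])
```

Deep variants discussed in the survey (e.g., multi-network or autoencoder-based shared representations) replace the linear projections above with learned nonlinear mappings, but the retrieval step in the common space is analogous.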

Key words

Cross-media analysis; Cross-media reasoning; Cross-media applications

CLC number

TP391 

Acknowledgements

The authors would like to thank Peng CUI, Shi-kui WEI, Ji-tao SANG, Shu-hui WANG, Jing LIU, and Bu-yue QIAN for their valuable discussions and assistance.

Copyright information

© Zhejiang University and Springer-Verlag GmbH Germany, part of Springer Nature 2017

Authors and Affiliations

  • Yu-xin Peng (1)
  • Wen-wu Zhu (2)
  • Yao Zhao (3)
  • Chang-sheng Xu (4)
  • Qing-ming Huang (5)
  • Han-qing Lu (4)
  • Qing-hua Zheng (6)
  • Tie-jun Huang (7)
  • Wen Gao (7)

  1. Institute of Computer Science and Technology, Peking University, Beijing, China
  2. Department of Computer Science and Technology, Tsinghua University, Beijing, China
  3. Institute of Information Science, Beijing Jiaotong University, Beijing, China
  4. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  5. Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  6. Department of Computer Science and Technology, Xi’an Jiaotong University, Xi’an, China
  7. School of Electronics Engineering and Computer Science, Peking University, Beijing, China