
Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input

Published in: International Journal of Computer Vision

Abstract

In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets demonstrating that our models implicitly learn semantically coupled object and word detectors.
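
To make the training objective concrete, here is a minimal, illustrative sketch of the kind of audio-visual similarity computation the abstract describes: a dual-encoder setup in which an image encoder yields a spatial feature map and an audio encoder yields a frame-level feature sequence, a region-by-frame "matchmap" of local similarities, a pooled image-caption score, and a margin ranking loss against sampled impostor pairs. This is not the authors' released code; the array shapes, pooling choice, and function names are assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' implementation).
import numpy as np

def matchmap(image_feats: np.ndarray, audio_feats: np.ndarray) -> np.ndarray:
    """Dot-product similarity between every image region and every audio frame.

    image_feats: (H, W, D) spatial feature map from an image encoder.
    audio_feats: (T, D) frame-level feature sequence from an audio encoder.
    Returns an (H, W, T) array of local region-frame similarities.
    """
    return np.einsum('hwd,td->hwt', image_feats, audio_feats)

def similarity_score(mm: np.ndarray) -> float:
    """Collapse a matchmap to one scalar: max-pool over image space, then
    average over audio frames (one of several plausible pooling choices)."""
    return float(mm.max(axis=(0, 1)).mean())

def ranking_loss(s_pos: float, s_imp_img: float, s_imp_aud: float,
                 margin: float = 1.0) -> float:
    """Hinge-style ranking loss: the matched image-caption pair should
    outscore pairs built with a sampled impostor image or impostor caption."""
    return (max(0.0, s_imp_img - s_pos + margin)
            + max(0.0, s_imp_aud - s_pos + margin))

# Toy usage with random arrays standing in for encoder outputs.
rng = np.random.default_rng(0)
img, img_imp = rng.normal(size=(14, 14, 512)), rng.normal(size=(14, 14, 512))
aud, aud_imp = rng.normal(size=(128, 512)), rng.normal(size=(128, 512))

s_pos = similarity_score(matchmap(img, aud))
loss = ranking_loss(s_pos,
                    similarity_score(matchmap(img_imp, aud)),
                    similarity_score(matchmap(img, aud_imp)))
print(f"matched-pair score {s_pos:.2f}, ranking loss {loss:.2f}")
```

Because every local region-frame similarity is retained in the matchmap, the same quantity that drives retrieval can also be inspected to see which image regions respond to which stretches of the spoken caption, which is the kind of emergent localization the analysis on Places 205 and ADE20k examines.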



References

  • Alishahi, A., Barking, M., & Chrupala, G. (2017). Encoding of phonology in a recurrent neural model of grounded speech. In Proceedings of the ACL conference on natural language learning (CoNLL).

  • Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C. L., et al. (2015). VQA: Visual question answering. In Proceedings of the IEEE international conference on computer vision (ICCV).

  • Arandjelovic, R., & Zisserman, A. (2017). Look, listen, and learn. In Proceedings of the IEEE international conference on computer vision (ICCV).

  • Aytar, Y., Vondrick, C., & Torralba, A. (2016). Soundnet: Learning sound representations from unlabeled video. In Proceedings of the neural information processing systems (NeurIPS).

  • Bergamo, A., Bazzani, L., Anguelov, D., & Torresani, L. (2014). Self-taught object localization with deep networks. CoRR. arXiv:1409.3964.

  • Bromley, J., Guyon, I., LeCun, Y., Säckinger, E., & Shah, R. (1994). Signature verification using a “siamese” time delay neural network. In J. D. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in neural information processing systems (Vol. 6, pp. 737–744). Burlington: Morgan-Kaufmann.

  • Cho, M., Kwak, S., Schmid, C., & Ponce, J. (2015). Unsupervised object discovery and localization in the wild: Part-based matching with bottom-up region proposals. In Proceedings of IEEE conference on computer vision and pattern recognition (CVPR).

  • Chrupala, G., Gelderloos, L., & Alishahi, A. (2017). Representations of language in a model of visually grounded speech signal. In Proceedings of the annual meeting of the association for computational linguistics (ACL).

  • Cinbis, R., Verbeek, J., & Schmid, C. (2016). Weakly supervised object localization with multi-fold multiple instance learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(1), 189–203.

  • de Vries, H., Strub, F., Chandar, S., Pietquin, O., Larochelle, H., & Courville, A. C. (2017). Guesswhat?! Visual object discovery through multi-modal dialogue. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Doersch, C., Gupta, A., & Efros, A. A. (2015). Unsupervised visual representation learning by context prediction. CoRR. arXiv:1505.05192.

  • Drexler, J., & Glass, J. (2017). Analysis of audio-visual features for unsupervised speech recognition. In Proceedings of the grounded language understanding workshop.

  • Dupoux, E. (2018). Cognitive science in the era of artificial intelligence: A roadmap for reverse-engineering the infant language-learner. Cognition, 173, 43–59.

  • Faghri, F., Fleet, D. J., Kiros, J. R., & Fidler, S. (2018). Vse++: Improving visual-semantic embeddings with hard negatives. In Proceedings of the British machine vision conference (BMVC).

  • Fang, H., Gupta, S., Iandola, F., Srivastava, R. K., Deng, L., Dollár, P., et al. (2015). From captions to visual concepts and back. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Fellbaum, C. (1998). WordNet: An electronic lexical database. Bradford: Bradford Books.

  • Gao, H., Mao, J., Zhou, J., Huang, Z., & Yuille, A. (2015). Are you talking to a machine? Dataset and methods for multilingual image question answering. In Proceedings of the neural information processing systems (NeurIPS).

  • Gelderloos, L., & Chrupala, G. (2016). From phonemes to images: Levels of representation in a recurrent neural model of visually-grounded language learning. arXiv:1610.03342.

  • Gemmeke, J. F., Ellis, D. P. W., Freedman, D., Jansen, A., Lawrence, W., Moore, R. C., et al. (2017). Audio set: An ontology and human-labeled dataset for audio events. In Proceedings of the international conference on acoustics, speech and signal processing (ICASSP).

  • Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2013). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Guérin, J., Gibaru, O., Thiery, S., & Nyiri, E. (2017). CNN features are also great at unsupervised classification. CoRR. arXiv:1707.01700.

  • Harwath, D., & Glass, J. (2017). Learning word-like units from joint audio-visual analysis. In Proceedings of the annual meeting of the association for computational linguistics (ACL).

  • Harwath, D., Recasens, A., Surís, D., Chuang, G., Torralba, A., & Glass, J. (2018). Jointly discovering visual objects and spoken words from raw sensory input. In Proceedings of the IEEE European conference on computer vision (ECCV).

  • Harwath, D., Torralba, A., & Glass, J. R. (2016). Unsupervised learning of spoken language with visual context. In Proceedings of the neural information processing systems (NeurIPS).

  • He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep residual learning for image recognition. CoRR. arXiv:1512.03385.

  • Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the international conference on machine learning (ICML).

  • Jansen, A., Church, K., & Hermansky, H. (2010). Toward spoken term discovery at scale with zero resources. In Proceedings of the annual conference of international speech communication association (INTERSPEECH).

  • Jansen, A., Plakal, M., Pandya, R., Ellis, D. P., Hershey, S., Liu, J., et al. (2018). Unsupervised learning of semantic audio representations. In Proceedings of the international conference on acoustics, speech and signal processing (ICASSP).

  • Jansen, A., & Van Durme, B. (2011). Efficient spoken term discovery using randomized algorithms. In Proceedings of the IEEE workshop on automatic speech recognition and understanding (ASRU).

  • Johnson, J., Karpathy, A., & Fei-Fei, L. (2016). Densecap: Fully convolutional localization networks for dense captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Kamper, H., Elsner, M., Jansen, A., & Goldwater, S. (2015). Unsupervised neural network based feature extraction using weak top-down constraints. In Proceedings of the international conference on acoustics, speech and signal processing (ICASSP).

  • Kamper, H., Jansen, A., & Goldwater, S. (2016). Unsupervised word segmentation and lexicon discovery using acoustic word embeddings. IEEE Transactions on Audio, Speech and Language Processing, 24(4), 669–679.

  • Kamper, H., Settle, S., Shakhnarovich, G., & Livescu, K. (2017). Visually grounded learning of keyword prediction from untranscribed speech. In Proceedings of the annual conference of international speech communication association (INTERSPEECH).

  • Karpathy, A., & Fei-Fei, L. (2015). Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Proceedings of the neural information processing systems (NeurIPS).

  • LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.

  • Lee, C., & Glass, J. (2012). A nonparametric Bayesian approach to acoustic model discovery. In Proceedings of the annual meeting of the association for computational linguistics (ACL).

  • Lewis, M. P., Simons, G. F., & Fennig, C. D. (2016). Ethnologue: Languages of the World (19th ed.). SIL International. Online version: http://www.ethnologue.com.

  • Lin, T., Maire, M., Belongie, S., Bourdev, L., Girshick, R., Perona, P., et al. (2015). Microsoft COCO: Common objects in context. CoRR. arXiv:1405.0312.

  • Malinowski, M., & Fritz, M. (2014). A multi-world approach to question answering about real-world scenes based on uncertain input. In Proceedings of the neural information processing systems (NeurIPS).

  • Malinowski, M., Rohrbach, M., & Fritz, M. (2015). Ask your neurons: A neural-based approach to answering questions about images. In Proceedings of the IEEE international conference on computer vision (ICCV).

  • Ondel, L., Burget, L., & Cernocky, J. (2016). Variational inference for acoustic unit discovery. In 5th Workshop on spoken language technology for under-resourced languages.

  • Owens, A., Isola, P., McDermott, J. H., Torralba, A., Adelson, E. H., & Freeman, W. T. (2016a). Visually indicated sounds. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Owens, A., Wu, J., McDermott, J. H., Freeman, W. T., & Torralba, A. (2016b). Ambient sound provides supervision for visual learning. In Proceedings of the IEEE European conference on computer vision (ECCV).

  • Park, A., & Glass, J. (2008). Unsupervised pattern discovery in speech. IEEE Transactions on Audio, Speech and Language Processing, 16(1), 186–197.

  • Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Reed, S. E., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., & Lee, H. (2016). Generative adversarial text to image synthesis. CoRR. arXiv:1605.05396.

  • Ren, M., Kiros, R., & Zemel, R. (2015). Exploring models and data for image question answering. In Proceedings of the neural information processing systems (NeurIPS).

  • Renshaw, D., Kamper, H., Jansen, A., & Goldwater, S. (2015). A comparison of neural network methods for unsupervised representation learning on the zero resource speech challenge. In Proceedings of the annual conference of international speech communication association (INTERSPEECH).

  • Roy, D. (2003). Grounded spoken language acquisition: Experiments in word learning. IEEE Transactions on Multimedia, 5(2), 197–209.

  • Roy, D., & Pentland, A. (2002). Learning words from sights and sounds: A computational model. Cognitive Science, 26, 113–146.

  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211–252. https://doi.org/10.1007/s11263-015-0816-y.

  • Russell, B., Efros, A., Sivic, J., Freeman, W., & Zisserman, A. (2006). Using multiple segmentations to discover objects and their extent in image collections. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Shih, K. J., Singh, S., & Hoiem, D. (2015). Where to look: Focus regions for visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. CoRR. arXiv:1409.1556.

  • Spelke, E. S. (1990). Principles of object perception. Cognitive Science, 14(1), 29–56. https://doi.org/10.1016/0364-0213(90)90025-R.

  • Thiolliere, R., Dunbar, E., Synnaeve, G., Versteegh, M., & Dupoux, E. (2015). A hybrid dynamic time warping-deep neural network architecture for unsupervised acoustic modeling. In Proceedings of the annual conference of international speech communication association (INTERSPEECH).

  • Thomee, B., Shamma, D. A., Friedland, G., Elizalde, B., Ni, K., Poland, D., et al. (2015). The new data and new challenges in multimedia research. CoRR. arXiv:1503.01817.

  • Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2015). Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Weber, M., Welling, M., & Perona, P. (2000). Towards automatic discovery of object categories. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., et al. (2015). Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the international conference on machine learning (ICML).

  • Zhang, T., Ramakrishnan, R., & Livny, M. (1996). BIRCH: An efficient data clustering method for very large databases. In ACM SIGMOD international conference on management of data (pp. 103–114).

  • Zhang, Y., Salakhutdinov, R., Chang, H. A., & Glass, J. (2012). Resource configurable spoken query detection using deep Boltzmann machines. In Proceedings of the international conference on acoustics, speech and signal processing (ICASSP).

  • Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2015). Object detectors emerge in deep scene CNNs. In Proceedings of the international conference on learning representations (ICLR).

  • Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

  • Zhou, B., Lapedriza, A., Xiao, J., Torralba, A., & Oliva, A. (2014). Learning deep features for scene recognition using places database. In Proceedings of the neural information processing systems (NeurIPS).

  • Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., & Torralba, A. (2017). Scene parsing through ADE20K dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).

Download references

Author information

Corresponding author

Correspondence to David Harwath.

Additional information

Communicated by M. Hebert.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The authors would like to thank Toyota Research Institute, Inc. for supporting this work.

About this article

Cite this article

Harwath, D., Recasens, A., Surís, D. et al. Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input. Int J Comput Vis 128, 620–641 (2020). https://doi.org/10.1007/s11263-019-01205-0
