Advances in Multimodal Information Retrieval and Generation

  • Book
  • © 2025

Overview

  • Provides a comprehensive overview of the state of the art in multimodal architectures and representation learning
  • Presents cutting-edge techniques, including Transformer-based neural models and multimodal learning methods
  • Explores the foundations and algorithms that power multimodal information retrieval (MMIR) and how information can be retrieved using multimodal queries

Part of the book series: Synthesis Lectures on Computer Vision (SLCV)

About this book

This book provides an extensive examination of state-of-the-art methods in multimodal retrieval, multimodal generation, and the emerging field of retrieval-augmented generation. The work is grounded in Transformer-based models and explores the complexities of blending and interpreting the intricate connections between text and images. The authors present cutting-edge theories, methodologies, and frameworks for multimodal retrieval and generation, aiming to give readers a comprehensive understanding of the current state and future prospects of multimodal AI. Serving as a bridge to mastering and leveraging advanced AI technologies in this field, the book is designed for students, researchers, practitioners, and AI enthusiasts alike, offering the tools needed to expand the horizons of what can be achieved in multimodal artificial intelligence.
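As a brief illustrative aside (a sketch, not an excerpt from the book): the core idea behind multimodal retrieval is to embed queries and candidates from different modalities into a shared vector space, so that cross-modal retrieval reduces to nearest-neighbour search. The minimal Python sketch below uses toy, hand-written embeddings standing in for what a trained Transformer encoder (for example, a CLIP-style model) would produce; all file names and values are hypothetical.

    import numpy as np

    # Toy embeddings standing in for the output of a trained multimodal
    # encoder (e.g., a CLIP-style text tower and image tower that map both
    # modalities into the same d-dimensional space). Values are hypothetical.
    image_embeddings = {
        "cat.jpg": np.array([0.90, 0.10, 0.00, 0.20]),
        "dog.jpg": np.array([0.10, 0.80, 0.10, 0.00]),
        "car.jpg": np.array([0.00, 0.10, 0.90, 0.30]),
    }
    # Hypothetical embedding of the text query "a photo of a cat".
    text_query = np.array([0.85, 0.15, 0.05, 0.10])

    def cosine_similarity(a, b):
        """Cosine similarity between two vectors."""
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Cross-modal retrieval: rank images by similarity to the text query.
    ranked = sorted(image_embeddings.items(),
                    key=lambda kv: cosine_similarity(text_query, kv[1]),
                    reverse=True)
    for name, emb in ranked:
        print(f"{name}: {cosine_similarity(text_query, emb):.3f}")

In a real system the embeddings would come from trained encoders and large collections would be searched with an approximate nearest-neighbour index; retrieval-augmented generation then passes the top-ranked results to a generative model as additional context.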

Authors and Affiliations

  • School of Computing, Informatics, and Decision Systems, Arizona State University, Tempe, USA

    Man Luo, Tejas Gokhale, Neeraj Varshney, Yezhou Yang, Chitta Baral

About the authors

Man Luo, Ph.D., is a Research Fellow at Mayo Clinic, Arizona. She received her Ph.D. from ASU in 2023. Her research interests lie in Natural Language Processing (NLP) and Computer Vision (CV), with a specific focus on open-domain information retrieval in multimodal settings and retrieval-augmented generation models. She has published first-author papers at top conferences such as AAAI, ACL, and EMNLP. She serves as a guest editor of the PLOS Digital Medicine journal and has served as a reviewer for the AAAI, IROS, EMNLP, NAACL, and ACL conferences. Dr. Luo is an organizer of the ODRUM workshops at CVPR 2022 and CVPR 2023 and of Multimodal4Health at ICHI 2024.

Tejas Gokhale, Ph.D., is an Assistant Professor at the University of Maryland, Baltimore County. He received his Ph.D. from Arizona State University in 2023, an M.S. from Carnegie Mellon University in 2017, and a B.E. (Honours) from the Birla Institute of Technology and Science, Pilani, in 2015. Dr. Gokhale is a computer vision researcher working on robust visual understanding, with a focus on the connection between vision and language, semantic data engineering, and active inference. His research draws inspiration from the principles of perception, communication, learning, and reasoning. He is an organizer of the ODRUM workshops at CVPR 2022 and CVPR 2023, the SERUM tutorial at WACV 2023, and the RGMV tutorial at WACV 2024.

Neeraj Varshney is a Ph.D. candidate at ASU working in natural language processing, primarily focusing on improving the efficiency and reliability of NLP models. He has published multiple papers at top-tier NLP and AI conferences, including ACL, EMNLP, EACL, NAACL, and AAAI, and is a recipient of the SCAI Doctoral Fellowship, the GPSA Outstanding Research Award, and a Jumpstart Research Grant. He has served as a reviewer for several conferences, including ACL, EMNLP, EACL, and IJCAI, and was selected as an outstanding reviewer by the EACL 2023 conference.

Yezhou Yang, Ph.D., is an Associate Professor with the School of Computing and Augmented Intelligence (SCAI), Arizona State University. He received his Ph.D. from the University of Maryland. His primary interests lie in cognitive robotics, computer vision, and robot vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and performing high-level reasoning over these primitives for intelligent robots.

Chitta Baral, Ph.D., is a Professor with the School of Computing and Augmented Intelligence (SCAI), Arizona State University, and received his Ph.D. from the University of Maryland. His primary interests lie in Natural Language Processing (NLP), Computer Vision (CV), the intersection of NLP and CV, and Knowledge Representation and Reasoning.
