This book provides an extensive examination of state-of-the-art methods in multimodal retrieval, generation, and the emerging field of retrieval-augmented generation. The work is grounded in Transformer-based models, exploring how such models combine and interpret the connections between text and images. The authors present cutting-edge theories, methodologies, and frameworks for multimodal retrieval and generation, aiming to give readers a comprehensive understanding of the current state and future prospects of multimodal AI. The book is therefore a valuable resource for anyone interested in the intricacies of multimodal retrieval and generation. Serving as a bridge to mastering and applying advanced AI technologies in this field, it is designed for students, researchers, practitioners, and AI enthusiasts alike, offering the tools needed to expand the horizons of what can be achieved in multimodal artificial intelligence.
Preface
Motivation and Background
Review: Methods for Information Retrieval under Single Modality Setting
Text IR
Image IR
Audio IR
Review: Multimodal Representation Learning
Evaluation Methods
Information Retrieval for Multi-modality Setting
Conclusions and Future Directions
Man Luo, Ph.D., is a Research Fellow at Mayo Clinic, Arizona. She received her Ph.D. from ASU in 2023. Her research interests lie in Natural Language Processing (NLP) and Computer Vision (CV), with a specific focus on open-domain information retrieval in multi-modality settings and retrieval-augmented generation models. She has published first-author papers at top conferences such as AAAI, ACL, and EMNLP. She serves as a guest editor of the PLOS Digital Medicine Journal and has served as a reviewer for the AAAI, IROS, EMNLP, NAACL, and ACL conferences. Dr. Luo is an organizer of the ODRUM workshops at CVPR 2022 and CVPR 2023 and of Multimodal4Health at ICHI 2024.
Tejas Gokhale, Ph.D., is an Assistant Professor at the University of Maryland, Baltimore County. He received his Ph.D. from Arizona State University in 2023, his M.S. from Carnegie Mellon University in 2017, and his B.E. (Honours) from the Birla Institute of Technology and Science, Pilani, in 2015. Dr. Gokhale is a computer vision researcher working on robust visual understanding, with a focus on the connection between vision and language, semantic data engineering, and active inference. His research draws inspiration from the principles of perception, communication, learning, and reasoning. He is an organizer of the ODRUM workshops at CVPR 2022 and CVPR 2023, the SERUM tutorial at WACV 2023, and the RGMV tutorial at WACV 2024.
Neeraj Varshney is a Ph.D. candidate at ASU working in natural language processing, primarily focusing on improving the efficiency and reliability of NLP models. He has published multiple papers at top-tier NLP and AI conferences, including ACL, EMNLP, EACL, NAACL, and AAAI, and is a recipient of the SCAI Doctoral Fellowship, the GPSA Outstanding Research Award, and a Jumpstart Research Grant. He has served as a reviewer for several conferences, including ACL, EMNLP, EACL, and IJCAI, and was selected as an outstanding reviewer by the EACL 2023 conference.
Yezhou Yang, Ph.D., is an Associate Professor with the School of Computing and Augmented Intelligence (SCAI), Arizona State University. He received his Ph.D. from the University of Maryland. His primary interests lie in Cognitive Robotics, Computer Vision, and Robot Vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and performing high-level reasoning over these primitives for intelligent robots.
Chitta Baral, Ph.D., is a Professor with the School of Computing and Augmented Intelligence (SCAI), Arizona State University. He received his Ph.D. from the University of Maryland. His primary interests lie in Natural Language Processing (NLP), Computer Vision (CV), the intersection of NLP and CV, and Knowledge Representation and Reasoning.