Although sensor fusion is an essential prerequisite for autonomous driving, it entails a number of challenges and potential risks. For example, the commonly used deep fusion networks are lacking in interpretability and robustness. To address these fundamental issues, this book introduces the mechanism of deep fusion models from the perspective of uncertainty and models the initial risks in order to create a robust fusion architecture.
This book reviews the multi-sensor data fusion methods applied in autonomous driving; its main body is divided into three parts: Basic, Method, and Advance. Starting from the mechanism of data fusion, it traces the development of autonomous perception technology and data fusion technology and surveys the various perception tasks based on multimodal data fusion. The book then proposes a series of innovative algorithms for autonomous driving perception tasks that improve the accuracy and robustness of those tasks and offer ideas for addressing the challenges in multi-sensor fusion methods. Finally, to move from technical research toward intelligent connected collaboration applications, it presents exploratory content such as practical fusion datasets, vehicle-road collaboration, and fusion mechanisms.
In contrast to the existing literature on data fusion and autonomous driving, this book focuses more on the deep fusion method for perception-related tasks, emphasizes the theoretical explanation of the fusion method, and fully considers the relevant scenarios in engineering practice. Helping readers acquire an in-depth understanding of fusion methods and theories in autonomous driving, it can be used as a textbook for graduate students and scholars in related fields or as a reference guide for engineers who wish to apply deep fusion methods.
Chapter 1: Introduction
1.1 Autonomous driving
1.2 Sensors
1.3 Perception
1.4 Multi-sensor fusion
1.5 Public datasets
1.6 Challenges
1.7 Summary
Chapter 2: Overview of Data Fusion Theory and Methods
2.1 Background
2.2 Data pre-processing
2.3 Model-based fusion
2.4 Learning-based fusion
2.5 Challenges and prospect
2.6 Summary
Chapter 3: Uncertainty in Fusion Networks
3.1 Formulate uncertainty in multimodal fusion
3.2 Model uncertainty in fusion
3.3 Data uncertainty in fusion
3.4 Redundancy and stability analysis
3.5 Challenges and prospect
3.6 Summary
Chapter 4: Generalized Fusion Methods
4.1 Background
4.2 Human-machine interactive fusion
4.3 Vehicle-road multi-view interactive fusion
4.4 Challenges and prospect
4.5 Summary
Chapter 5: Multi-sensor calibration and localization
5.1 Background
5.2 Dataset and criterion
5.3 Multi-sensor fusion calibration
5.4 Multi-sensor fusion localization
5.5 Challenges and prospect
5.6 Summary
Chapter 6: Multi-sensor object detection
6.1 Background
6.2 Dataset and criterion
6.3 LiDAR-Image fusion object detection
6.4 Radar-LiDAR fusion object detection
6.5 Radar-Image fusion object detection
6.6 Lightweight fusion paradigm
6.7 Challenges and prospect
6.8 Summary
Chapter 7: Multi-sensor scene segmentation
7.1 Background
7.2 Dataset and criterion
7.3 Comparison of different fusion architectures in segmentation
7.4 Attention in multimodal fusion segmentation
7.5 Adaptive strategies in multimodal fusion segmentation
7.6 Challenges and prospect
7.7 Summary
Chapter 8: Multi-sensor fusion for three-dimensional transportation
8.1 The scene of three-dimensional transportation
8.2 Sequential model fused with motion information
8.3 Object detection based on stereoscopic motion view
8.4 Scene segmentation based on stereoscopic motion view
8.5 Challenges and prospect
8.6 Summary
Chapter 9: Platform for autonomous driving
9.1 Self-driving car
9.2 Simulation platform
9.3 Sensor evaluation and comparison
9.4 Creating datasets
Chapter 10: Conclusions
10.1 Summary
10.2 Future work
Prof. Xinyu Zhang is an associate professor at the School of Vehicle and Mobility, Tsinghua University. He was a research fellow at the University of Cambridge, UK, in 2008. Since 2014, he has served as Deputy Secretary General of the Chinese Association for Artificial Intelligence. As director of the Tsinghua Mengshi team, he invented the first amphibious autonomous flying car in China and proposed a new method of collaborative fusion for perception information and motion information in three-dimensional traffic. His research interests include multimodal fusion, unmanned ground vehicles, and flying cars.
Prof. Jun Li is a professor at the School of Vehicle and Mobility, Tsinghua University. He is the President of the China Society of Automotive Engineers and an Academician of the Chinese Academy of Engineering. Based at the Intelligent Vehicle Design and Safety Technology Research Center, he has led his team's work on the core technologies of intelligent driving, principally systems engineering research on the integration of smart city, smart transportation, and smart vehicle (SCSTSV). His research focuses on cutting-edge technologies such as intelligent shared vehicle design, safety of the intended functionality, 5G vehicle equipment, and fusion perception, with the aim of overcoming the core problems of intelligent driving and improving the competitiveness of intelligent connected vehicles.
Dr. Zhiwei Li is a supervisor of master's students at Beijing University of Chemical Technology. In 2020, he began postdoctoral research with Academician Jun Li at Tsinghua University. His main research interests include computer vision, intelligent perception for autonomous driving, and robot system architecture.
Prof. Huaping Liu is a professor at the Department of Computer Science and Technology, Tsinghua University. He serves as an associate editor for various journals, including IEEE Transactions on Automation Science and Engineering, IEEE Transactions on Industrial Informatics, IEEE Robotics and Automation Letters, Neurocomputing, and Cognitive Computation. He has served as an associate editor for ICRA and IROS and on the IJCAI, RSS, and IJCNN Program Committees. His main research interests are robotic perception and learning.
Mo Zhou is currently a doctoral candidate at the School of Vehicle and Mobility, Tsinghua University, supervised by Prof. Jun Li. She received MS degree in image and video communications and signal processing from the University of Bristol, Bristol, UK. Her research interests include intelligent vehicles, deep learning, environmental perception, and driving safety.
Dr. Li Wang is a postdoctoral fellow in the State Key Laboratory of Automotive Safety and Energy and the School of Vehicle and Mobility, Tsinghua University. He received his PhD degree in mechatronic engineering at the State Key Laboratory of Robotics and System, Harbin Institute of Technology, in 2020, and was a visiting scholar at Nanyang Technological University for two years. He is the author of more than 20 SCI/EI articles. His research interests include autonomous-driving perception, 3D robot vision, and multi-modal fusion.
Zhenhong Zou is an assistant researcher at the School of Vehicle and Mobility, Tsinghua University. He received his BS degree in Information and Computation Science from Beihang University and was subsequently a visiting student at the University of California, Los Angeles, USA, supervised by Prof. Deanna Needell. His research interests include autonomous driving and multi-sensor fusion.