V2X-Real Dataset Release!!!

We have recently released the COMPLETE set of 📌V2X-Real📌, the first large-scale, high-quality real-world dataset carefully crafted for Vehicle-to-Everything (V2X) cooperative perception and other driving automation tasks. Many of you have reached out to me asking about such a dataset, and here it is. We hope you enjoy using it in your research!

This dataset features multiple connected agents, including two automated vehicles and two infrastructure nodes, delivering an unparalleled multi-view, multi-modal sensor data stream. It covers diverse road users, with an especially large number of pedestrians and other vulnerable road users. With over 1.2 million annotated 3D bounding boxes spanning 10 object categories, 33K LiDAR frames, and 171K frames of multi-view camera data, V2X-Real sets a new benchmark for automated and cooperative driving research.

Please also dive into our four specialized sub-datasets, tailored to different collaboration modes: Vehicle-Centric (VC), Infrastructure-Centric (IC), Vehicle-to-Vehicle (V2V), and Infrastructure-to-Infrastructure (I2I). It is time to boost your research with our comprehensive benchmarks and open-source code, designed to accelerate advances in multi-class, multi-agent V2X cooperative perception and driving automation. Let’s unlock limitless possibilities with V2X-Real! Please reach out if you are interested in collaborating.
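If you want a quick feel for working with the four collaboration-mode sub-datasets, here is a minimal Python sketch that counts LiDAR frames per mode. Note that the root path, folder names, and file extension below are illustrative assumptions, not the official V2X-Real layout or API:

    from pathlib import Path

    # The four collaboration-mode sub-datasets (folder names are hypothetical).
    SUBSETS = {
        "vc": "Vehicle-Centric",
        "ic": "Infrastructure-Centric",
        "v2v": "Vehicle-to-Vehicle",
        "i2i": "Infrastructure-to-Infrastructure",
    }

    root = Path("data/v2x-real")  # assumed local download location
    for folder, name in SUBSETS.items():
        # Count LiDAR point-cloud files per sub-dataset (.pcd is an assumed extension).
        frames = list((root / folder).rglob("*.pcd"))
        print(f"{name}: {len(frames)} LiDAR frames")

See the website and code release linked below for the actual directory structure and data-loading utilities.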

For more details, please check our website: https://lnkd.in/ge4TzkVT

Paper (presented at ECCV 2024 earlier this year): https://lnkd.in/gyDkGu5X