Privacy-Preserving V2X Collaborative Perception Integrating Unknown Collaborators
Authors: Bin Lu, Xinyu Xiao, Changzhou Zhang, Yang Zhou, Zhiyu Xiang, Hangguan Shan, Eryun Liu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the challenging DAIR-V2X and V2V4Real demonstrate that: 1) MSD achieves remarkable performance, outperforming others by at least 2.8% and 6.7% on DAIR-V2X and V2V4Real, respectively; 2) after domain adaptation, it significantly outperforms the No Fusion and Late Fusion scenarios and can approach or even surpass the performance of joint training. |
| Researcher Affiliation | Collaboration | 1) Zhejiang University; 2) Ant Group; 3) Hangzhou Geely Automobile Digital Technology Co., LTD; 4) Hangzhou Fenghua Technology Co., LTD |
| Pseudocode | No | The paper describes the proposed method using architectural diagrams (Figure 2, Figure 3, Figure 4) and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We employ the challenging DAIR-V2X (Yu et al. 2022) for evaluating our method and other SOTA approaches. ... V2V4Real (Xu et al. 2023b) is the first large-scale real-world dataset for vehicle-to-vehicle cooperative perception in autonomous driving. |
| Dataset Splits | No | The paper mentions using DAIR-V2X and V2V4Real datasets and describes how agents are selected during training and inference, but it does not provide specific train/test/validation split percentages or sample counts for these datasets. |
| Hardware Specification | Yes | All models are trained end-to-end for 60 epochs on an RTX 3090. |
| Software Dependencies | No | The paper mentions using Adam for optimization but does not provide specific software dependencies with version numbers for libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | An initial learning rate of 0.001 is selected, and is multiplied by 0.1 every 20 epochs during training. Early stop is used to find the best epoch. In all experiments, λreg is set to 2, λcls is set to 1 and Ns is set to 9. |
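The reported training schedule (initial learning rate 0.001, multiplied by 0.1 every 20 epochs over a 60-epoch run) corresponds to a standard step-decay rule. A minimal framework-free sketch of that rule (the helper name `step_decay_lr` is my own; the paper does not publish code):

```python
def step_decay_lr(epoch, base_lr=0.001, gamma=0.1, step=20):
    """Return the learning rate for a given epoch under step decay:
    the base rate is multiplied by `gamma` once every `step` epochs,
    matching the paper's reported schedule (0.001, x0.1 every 20 epochs)."""
    return base_lr * gamma ** (epoch // step)

# Over the paper's 60-epoch run this yields three phases:
for epoch in (0, 20, 40):
    print(f"epochs {epoch}-{epoch + 19}: lr = {step_decay_lr(epoch):g}")
```

In PyTorch the same behavior would typically be configured with `torch.optim.Adam` (the paper's stated optimizer) wrapped in `torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)`.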