GGS: Generalizable Gaussian Splatting for Lane Switching in Autonomous Driving

Authors: Huasong Han, Kaixuan Zhou, Xiaoxiao Long, Yusen Wang, Chunxia Xiao

AAAI 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Extensive validation of our method against existing approaches demonstrates state-of-the-art performance. We conduct extensive experiments on a wide range of scenarios to validate the effectiveness of our algorithm, and achieve state-of-the-art street novel view synthesis even without LiDAR. From Table 1, the methods based on 3D Gaussian Splatting, such as GaussianPro and DC-Gaussian, generate slightly better quality than methods based on neural radiance fields. However, in some scenes their rendering quality is inferior, and our model performs better. As illustrated in Figure 5, GaussianPro and DC-Gaussian fail to capture details such as tree leaves and utility poles. Comparisons of different models for lane switching are shown in Figure 6; relative to the other models, our method demonstrates excellent overall rendering quality and lane-switching quality. To demonstrate the effectiveness of the virtual lane generation module, we use FID (Heusel et al. 2017) to conduct lane-switching experiments on different models, as shown in Table 3. FID@LEFT and FID@RIGHT denote the distance between the rendered images of the left and right lanes and the ground truth. The qualitative results are illustrated in Figure 6. Our model achieves high rendering quality while ensuring that quality remains unaffected during lane switching, with quantitative results shown in Table 2 and qualitative results shown in Figure 8.
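For context, FID compares the Gaussian statistics of deep image features between two sets of images. A minimal sketch of the metric itself (assuming feature vectors, e.g. Inception activations, have already been extracted; the function name `fid` and the toy inputs are illustrative, not from the paper):

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of feature vectors.

    FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrtm(S1 @ S2))
    where mu, S are the mean and covariance of each feature set.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    # sqrtm can return a complex array with tiny imaginary noise
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))

# Identical feature sets should give FID close to 0
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 8))
print(fid(x, x))
```

Lower FID@LEFT / FID@RIGHT then indicates that laterally shifted (lane-switched) renders stay close in feature statistics to the ground-truth views.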
Researcher Affiliation | Collaboration | Huasong Han1*, Kaixuan Zhou2*, Xiaoxiao Long3, Yusen Wang1, Chunxia Xiao1. 1School of Computer Science, Wuhan University, Wuhan, China; 2Huawei Technologies Riemann Lab, Wuhan, Hubei, China; 3The University of Hong Kong, Hong Kong, China. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes its methodology in detailed prose and includes figures illustrating the overall framework and concepts, but it does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing code, nor does it provide a link to a code repository or mention code in supplementary materials.
Open Datasets | Yes | Evaluation on KITTI and Brno Urban. From Table 1... We train the model on the KITTI dataset and test it on the Brno Urban dataset (Ligocki, Jelinek, and Zalud 2020).
Dataset Splits | No | The paper mentions using the KITTI and Brno Urban datasets for evaluation and testing on different scenarios (e.g., 'KITTI Residential', 'KITTI Road', 'KITTI City', 'Left side view', 'Left front side view', 'Right side view'), but it does not specify exact percentages or counts for training, validation, or test splits, nor does it reference any standard predefined splits for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper mentions several frameworks and tools, such as Agisoft Metashape (met 2019), the Stable Diffusion framework (Rombach et al. 2022), a Variational Auto-Encoder (Kingma and Welling 2013), and CLIP (Radford et al. 2021), but it does not list specific software libraries or programming languages with the version numbers needed to replicate the implementation of the proposed GGS method.
Experiment Setup | No | The paper describes the overall methodology, loss functions, and modules, but it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or training configurations.