Equivariant Graph Learning for High-density Crowd Trajectories Modeling

Authors: Yang Liu, Zinan Zheng, Yu Rong, Jia Li

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on 8 datasets from 5 different environments show that CrowdEGL outperforms existing models by a large margin." "We conduct extensive experiments to evaluate the performance of CrowdEGL on eight datasets of five crowded environments. Experimental results show that it achieves significantly better generalization ability over the state-of-the-art models. Ablation studies demonstrate the effectiveness of our model designs."
Researcher Affiliation | Collaboration | Yang Liu (EMAIL), Hong Kong University of Science and Technology and Hong Kong University of Science and Technology (Guangzhou); Zinan Zheng (EMAIL), Hong Kong University of Science and Technology (Guangzhou); Yu Rong (EMAIL), Alibaba DAMO Academy; Jia Li (EMAIL), Hong Kong University of Science and Technology (Guangzhou)
Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations (e.g., Eq. 1, 3, 4, 5, 6, 7, 8, 9, 10, 11) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "To facilitate the research on high-density crowd modeling, we make our datasets and implementation available at Supplementary Material."
Open Datasets | Yes | "Methods are evaluated on public pedestrian dynamic data that is built up by the Institute for Advanced Simulation 7: Civil Safety Research of Forschungszentrum Jülich (Cao et al., 2017)." Dataset footnote: https://ped.fz-juelich.de/database/doku.php
Dataset Splits | Yes | "For all datasets, 70%, 10%, and 20% are randomly split over time for training, validation, and testing, respectively."
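The 70/10/20 split quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `split_indices`, the seeded shuffle, and the assumption that "randomly split over time" means shuffling time-ordered frame indices are all my own.

```python
import random

def split_indices(n_frames, train=0.7, val=0.1, seed=0):
    """Randomly partition time-ordered frame indices into
    train/validation/test sets (70%/10%/20% by default).

    A fixed seed keeps the split reproducible across runs.
    """
    idx = list(range(n_frames))
    random.Random(seed).shuffle(idx)
    n_train = int(n_frames * train)
    n_val = int(n_frames * val)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

# Example: 1000 frames -> 700 train, 100 validation, 200 test.
train_idx, val_idx, test_idx = split_indices(1000)
```

Whether the paper shuffles individual frames or contiguous time windows is not stated; the sketch simply shows the proportions.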
Hardware Specification | Yes | "All models are implemented based on PyTorch and PyG library (Fey & Lenssen, 2019), trained on GeForce RTX 4090 GPU." "Results are run on GeForce RTX 4090 GPU."
Software Dependencies | No | "All models are implemented based on PyTorch and PyG library (Fey & Lenssen, 2019)." While PyTorch and the PyG library are mentioned, specific version numbers for these software components are not provided.
Experiment Setup | Yes | "Adam optimizer with learning rate 0.0005, batch size 100, the hidden dimension 64, weight decay 1×10⁻¹⁰, the message passing layer number 4 and the decoder layer number 2. All models are trained for 5000 epochs with an early stopping strategy of 100. For CrowdEGL, the strength λ of equivariance loss is tuned from 0.1 to 1 with a step size of 0.1."
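The hyperparameters quoted above can be collected into a single configuration sketch. This is a hypothetical layout: the key names (`hidden_dim`, `mp_layers`, etc.) are illustrative and not taken from the authors' implementation.

```python
# Hypothetical configuration mirroring the reported experiment setup.
CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 5e-4,        # 0.0005
    "batch_size": 100,
    "hidden_dim": 64,
    "weight_decay": 1e-10,        # 1x10^-10
    "mp_layers": 4,               # message passing layers
    "decoder_layers": 2,
    "max_epochs": 5000,
    "early_stopping_patience": 100,
}

# Equivariance-loss strength lambda, tuned from 0.1 to 1.0 in steps of 0.1.
LAMBDA_GRID = [round(0.1 * k, 1) for k in range(1, 11)]
```

With a grid like `LAMBDA_GRID`, the sweep described in the paper amounts to ten training runs, one per lambda value, selecting the best by validation performance.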