TSC-Net: Prediction of Pedestrian Trajectories by Trajectory-Scene-Cell Classification

Authors: BO HU, Tat-Jen Cham

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comparative experiments show that TSC-Net achieves SOTA performance on several datasets on most metrics. In particular, for goal estimation, TSC-Net is shown to predict goals better for trajectories with irregular speed. ... We demonstrate our approach outperforms most existing methods on two datasets.
Researcher Affiliation | Academia | Bo Hu, Tat-Jen Cham, College of Computing and Data Science, Nanyang Technological University, 50 Nanyang Ave, Block N4, Singapore. EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology using textual explanations and a framework diagram (Figure 2), but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Source code for TSC-Net is released at github.com/hubovc/TSC-Net.
Open Datasets | Yes | The Stanford Drone Dataset (SDD) (Robicquet et al., 2016) is a large-scale benchmark containing more than 11,000 pedestrians across 20 different scenes. ... The Intersection Drone Dataset (inD) (Bock et al., 2020) contains about 10,000 pedestrians in 4 different road-intersection scenes. The ETH-UCY dataset (Lerner et al., 2007; Pellegrini et al., 2009) includes 5 subsets.
Dataset Splits | Yes | The experiments cover short-term and long-term prediction settings. Most previous works focus on the short-term setting with T=20 and τ=8, where the source videos are down-sampled to 2.5 fps; this setting is used in the experiments on SDD and ETH-UCY. Following Mangalam et al. (2021), the long-term setting uses T=35 and τ=5 at a 1 fps frame rate, and is applied in the experiments on SDD and inD. ... For ETH-UCY, evaluation follows the leave-one-out validation strategy over the 5 subsets.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions general software components and frameworks such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Transformers, and Multi-Layer Perceptrons (MLPs), but does not specify any particular software libraries with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | No | The paper describes the general architecture and the loss function (Equation 11) with weights λ and α, but does not provide their specific values or other concrete hyperparameters such as learning rate, batch size, or optimizer settings.
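The prediction settings quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration of the assumed evaluation protocol, not code from the paper: a trajectory window of T frames is split into τ observed frames and T−τ frames to predict, and ETH-UCY evaluation holds out one subset per fold (the function and variable names here are hypothetical).

```python
def split_trajectory(traj, T, tau):
    """Split a T-frame trajectory window into observed and future parts."""
    assert len(traj) >= T, "trajectory shorter than the setting's horizon"
    window = traj[:T]
    return window[:tau], window[tau:]

# Short-term setting (SDD, ETH-UCY): T=20, tau=8 at 2.5 fps
# -> observe 8 frames (3.2 s), predict 12 frames (4.8 s).
obs, fut = split_trajectory(list(range(20)), T=20, tau=8)

# Long-term setting (SDD, inD): T=35, tau=5 at 1 fps
# -> observe 5 frames (5 s), predict 30 frames (30 s).
obs_long, fut_long = split_trajectory(list(range(35)), T=35, tau=5)

# Leave-one-out validation over the 5 ETH-UCY subsets:
# each fold tests on one subset and trains on the other four.
subsets = ["eth", "hotel", "univ", "zara1", "zara2"]
folds = [(test, [s for s in subsets if s != test]) for test in subsets]
```

Each fold in `folds` pairs one held-out test subset with the remaining four training subsets, matching the leave-one-out strategy described above.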