An Attribute-based Method for Video Anomaly Detection

Authors: Tal Reiss, Yedid Hoshen

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluated our method on three publicly available VAD datasets, using their training and test splits. Only test videos included anomalous events. We report the statistics of the datasets in Tab. 1. Our method achieves the highest performance on the three most popular public benchmarks. It consists of three simple representations and does not require training. Ablation study: we report in Tab. 3 the anomaly detection performance on the Ped2, Avenue, and ShanghaiTech datasets for all attribute combinations.
Researcher Affiliation | Academia | Tal Reiss (EMAIL), School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel; Yedid Hoshen (EMAIL), School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel.
Pseudocode | No | The paper describes the method using textual explanations and figures (e.g., Fig. 2 for an overview) rather than structured pseudocode or algorithm blocks. No section or figure is explicitly labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | Our code is available at https://github.com/talreiss/Accurate-Interpretable-VAD.
Open Datasets | Yes | We evaluated our method on three publicly available VAD datasets, using their training and test splits. ... UCSD Ped2: this dataset (Mahadevan et al., 2010)... CUHK Avenue: this dataset (Lu et al., 2013)... ShanghaiTech Campus: this dataset (Liu et al., 2018a)...
Dataset Splits | Yes | We evaluated our method on three publicly available VAD datasets, using their training and test splits. Only test videos included anomalous events. We report the statistics of the datasets in Tab. 1. (Table 1 shows 'Total Train set' and 'Test set' frame counts for each dataset.)
Hardware Specification | Yes | We carried out all our experiments on an NVIDIA RTX 2080 GPU.
Software Dependencies | No | The paper mentions several tools and models, such as ResNet50 Mask-RCNN, FlowNet2, AlphaPose, and ViT-B/16 CLIP, but it does not specify software library versions (e.g., Python, PyTorch, TensorFlow, CUDA) that would be needed for replication.
Experiment Setup | Yes | Specifically for Ped2, Avenue, and ShanghaiTech, we set confidence thresholds of 0.5, 0.8, and 0.8. ... We use H_velocity × W_velocity = 224 × 224 to rescale flow maps. ... We use B = 1 orientations for Ped2 and B = 8 orientations for Avenue and ShanghaiTech. ... When testing, for anomaly scoring we use kNN for the pose and deep representations with k = 1 nearest neighbors. For velocity, we use a GMM with n = 5 Gaussians.
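The scoring setup quoted above (kNN with k = 1 for pose and deep representations, a 5-component GMM for velocity) can be sketched as follows. This is a hedged illustration, not the authors' implementation: the feature arrays, dimensions, and variable names are assumptions, and scikit-learn stands in for whatever nearest-neighbor and mixture-model code the released repository actually uses.

```python
# Sketch of the per-representation anomaly scoring described in the paper:
# kNN distance (k = 1) for pose/deep features, GMM (n = 5 components)
# negative log-likelihood for velocity features. Feature shapes are made up.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 16))  # features from normal-only training videos
test_feats = rng.normal(size=(10, 16))    # features from test frames

# kNN score: distance to the single nearest training sample (k = 1);
# larger distance = more anomalous.
knn = NearestNeighbors(n_neighbors=1).fit(train_feats)
distances, _ = knn.kneighbors(test_feats)
knn_scores = distances[:, 0]

# GMM score: negative log-likelihood under a 5-component Gaussian mixture
# fit on training features; lower likelihood = more anomalous.
gmm = GaussianMixture(n_components=5, random_state=0).fit(train_feats)
gmm_scores = -gmm.score_samples(test_feats)

print(knn_scores.shape, gmm_scores.shape)  # one score per test frame
```

In the paper's pipeline these per-attribute scores would then be combined into a single frame-level anomaly score; the combination rule is not part of this sketch.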