No Identity, no problem: Motion through detection for people tracking
Authors: Martin Engilberge, Friedrich Wilke Grosche, Pascal Fua
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our approach delivers state-of-the-art results for single- and multi-view multi-target tracking on the MOT17 and WILDTRACK datasets. |
| Researcher Affiliation | Academia | Martin Engilberge, Computer Vision Laboratory, EPFL; F. Wilke Grosche, Computer Vision Laboratory, EPFL; Pascal Fua, Computer Vision Laboratory, EPFL |
| Pseudocode | No | The paper describes the algorithm in text and formulas within Section 3.2, but it does not present a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Code can be found at https://github.com/cvlab-epfl/noid-nopb. |
| Open Datasets | Yes | We use the challenging single-view MOT17 dataset (Milan et al., 2016) and multi-view WILDTRACK dataset (Chavdarova et al., 2018) to demonstrate our model's ability to predict accurate human motion without requiring any motion annotations. |
| Dataset Splits | Yes | Since the test set is private we follow the train/val split of Zhou et al. (2020). |
| Hardware Specification | Yes | Our models are implemented in PyTorch (Paszke et al., 2019) and trained on a single NVIDIA A100 GPU. |
| Software Dependencies | No | Our models are implemented in PyTorch (Paszke et al., 2019) and trained on a single NVIDIA A100 GPU. We use the existing implementation of MMDetection (Chen et al., 2019) to develop our training pipeline. The text mentions software like PyTorch and MMDetection but does not provide specific version numbers for these or other key software components. |
| Experiment Setup | Yes | The model is trained on WILDTRACK using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0001 and a batch size of one. The learning rate is halved after epochs 20, 40 and 60. The reconstruction hyperparameter λr is initialized to 0.8; during training it is increased at the end of every epoch by 0.08 until it caps out at 5. ... The hyperparameters in the loss L are set to λfb = 0.05 and λse = 1. A summary of the hyperparameters can be found in Table A.6. |
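The learning-rate and λr schedules in the row above can be sketched in plain Python. This is a minimal illustration, not the authors' code: the `schedule` function and its structure are assumptions; only the numeric hyperparameters (lr = 1e-4, batch size one, halving at epochs 20/40/60, the λr ramp from 0.8 by 0.08 per epoch capped at 5, λfb = 0.05, λse = 1) come from the paper's text.

```python
def schedule(num_epochs):
    """Return a list of (learning_rate, lambda_r) values at the end of
    each epoch, following the schedule quoted from the paper. The epoch
    loop body (Adam updates with batch size one) is elided."""
    lr, lambda_r = 1e-4, 0.8
    lambda_fb, lambda_se = 0.05, 1.0  # fixed loss weights from the text
    history = []
    for epoch in range(1, num_epochs + 1):
        # ... one epoch of training with Adam would run here ...
        if epoch in (20, 40, 60):
            lr *= 0.5                 # learning rate halved at these epochs
        lambda_r = min(lambda_r + 0.08, 5.0)  # ramp lambda_r, capped at 5
        history.append((lr, lambda_r))
    return history

hist = schedule(80)
final_lr, final_lambda_r = hist[-1]
```

Under this reading, λr reaches its cap of 5 around epoch 53 ((5 − 0.8) / 0.08 = 52.5) and the learning rate ends at 1e-4 / 8 after the three halvings.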