Enhancing Low-Rank Adaptation with Recoverability-Based Reinforcement Pruning for Object Counting

Authors: Haojie Guo, Junyu Gao, Yuan Yuan

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four cross-scenario datasets demonstrate that the proposed method can remove redundant network parameters while ensuring network performance, with a reduction of up to 63%.
Researcher Affiliation | Collaboration | Haojie Guo (1), Junyu Gao (1,2), Yuan Yuan (1)*; (1) Northwestern Polytechnical University; (2) Institute of Artificial Intelligence (TeleAI), China Telecom
Pseudocode | No | The paper describes the proposed method and its components using natural language and mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement or link indicating that source code for the methodology is openly available or provided in supplementary materials.
Open Datasets | Yes | The CAR Parking lot (CARPK) and Pontifical Catholic University of Parana (PUCPR) datasets contain images of vehicle targets in parking lots from a drone's perspective... Shanghai Tech A (SHHA) and Shanghai Tech B (SHHB) are obtained by collecting images of densely populated crowds from the internet and images from fixed surveillance perspectives, respectively (Zhang et al. 2016).
Dataset Splits | No | The paper mentions the datasets used (CARPK, PUCPR, SHHA, SHHB) and states that "The input images are uniformly resized to 720 × 720 pixels," but it does not provide specific details regarding the training, validation, or test splits.
Hardware Specification | Yes | All experiments are conducted on a server with four RTX 3090 GPUs.
Software Dependencies | No | The optimizers for both the action classification network and the counting network are AdamW. However, specific version numbers for software libraries, programming languages, or other dependencies are not provided.
Experiment Setup | Yes | The learning rate for the proposed E3RP network decoder is set to 0.0001. The optimizers for both the action classification network and the counting network are AdamW. The cosine similarity threshold τ in Equation (5) is set to 0.99, and the base reward value is 10. The memory pool capacity of the deep Q-learning network is set to 100.
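The reported hyperparameters above can be collected into a minimal, stdlib-only sketch. This is an assumption-laden illustration, not the authors' code (which is not released): the exact reward rule is not spelled out in this summary, so `recoverability_reward` simply grants the base reward when cosine similarity meets the stated threshold τ = 0.99, and the deque stands in for the deep Q-learning memory pool of capacity 100.

```python
import math
from collections import deque

# Hyperparameters as reported in the paper's experiment setup.
LR = 1e-4              # learning rate for the E3RP network decoder (AdamW)
TAU = 0.99             # cosine similarity threshold from Equation (5)
BASE_REWARD = 10       # base reward value
MEMORY_CAPACITY = 100  # deep Q-learning memory pool capacity

# A bounded deque models the fixed-capacity replay memory pool:
# once full, appending a new transition evicts the oldest one.
memory_pool = deque(maxlen=MEMORY_CAPACITY)

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors (plain Python)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recoverability_reward(feat_pruned, feat_full):
    """Hypothetical reward rule: grant the base reward when the pruned
    network's features stay close to the full network's (similarity >= TAU),
    otherwise grant nothing."""
    return BASE_REWARD if cosine_similarity(feat_pruned, feat_full) >= TAU else 0
```

In this sketch, identical pruned and full features yield the full base reward, while features that drift below the threshold yield none; the actual reward shaping in the paper may differ.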