Exploiting Position Information in Convolutional Kernels for Structural Re-parameterization

Authors: Tianxiang Hao, Hui Chen, Guiguang Ding

IJCAI 2025

Reproducibility evaluation (variable, result, and supporting LLM response):

Research Type: Experimental
"We conduct extensive experiments on four vision tasks, ranging from image-level and object-level tasks to pixel-level tasks. Experimental results show that PBConv can consistently achieve superior performance compared with existing state-of-the-art methods, improving plenty of architectures on various datasets and tasks."

Researcher Affiliation: Academia
Tianxiang Hao (1,2), Hui Chen (3), and Guiguang Ding (1). 1: School of Software, Tsinghua University; 2: Hangzhou Zhuoxi Institute of Brain and Intelligence; 3: Beijing National Research Center for Information Science and Technology (BNRist). EMAIL, EMAIL, EMAIL

Pseudocode: No
The paper describes a "fast heuristic search algorithm" but does not present it in a structured pseudocode or algorithm block.

Open Source Code: No
The paper contains no explicit statement about releasing source code and provides no link to a code repository.

Open Datasets: Yes
"We do evaluation on CIFAR [Krizhevsky et al., 2009] and ImageNet [Deng et al., 2009] classification, Cityscapes [Cordts et al., 2016] segmentation, GoPro [Nah et al., 2017] deblurring and COCO [Lin et al., 2014] detection."

Dataset Splits: Yes
"We do evaluation on CIFAR [Krizhevsky et al., 2009] and ImageNet [Deng et al., 2009] classification, Cityscapes [Cordts et al., 2016] segmentation, GoPro [Nah et al., 2017] deblurring and COCO [Lin et al., 2014] detection." Models are trained on Cityscapes with ImageNet pre-trained backbone weights.

Hardware Specification: No
The paper does not specify the hardware (e.g., GPU or CPU models, or memory) used to run the experiments.

Software Dependencies: No
The paper does not specify software dependencies or version numbers used for the experiments.

Experiment Setup: No
"We first build a baseline and then replace its conv-BN sequence with ACB/DBB/PBConv, and train all models with identical configurations for a fair comparison."