Exploring Salient Object Detection with Adder Neural Networks

Authors: Bo-Wen Yin, Zheng Lin

AAAI 2025

Reproducibility assessment. Each entry lists the variable, the assessed result, and supporting evidence quoted from the paper (the LLM response).
Research Type: Experimental. Evidence: "Based on our empirical studies, we show that directly replacing the convolutions..."; "Experiments on popular salient object detection benchmarks demonstrate that our proposed method..."; "Experiment Setup. Implementation details. The implementation of the proposed method is based on the PyTorch (Paszke et al. 2019) framework. All the experiments are performed using the Adam (Kingma and Ba 2015) optimizer..."
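The replacement the paper describes, swapping multiplication-based convolutions for adder layers, follows the AdderNet idea (Chen et al. 2020), where the multiply-accumulate of a convolution is replaced by a negative L1 distance between the input patch and the filter. The sketch below illustrates that operation on flat vectors; the function names and shapes are illustrative and not taken from the paper:

```python
def adder_response(patch, kernel):
    """AdderNet-style response: similarity computed with only additions
    and subtractions, -sum(|x - w|). It is maximal (zero) when the input
    patch matches the filter exactly, and more negative otherwise."""
    assert len(patch) == len(kernel)
    return -sum(abs(x - w) for x, w in zip(patch, kernel))

def conv_response(patch, kernel):
    """Standard convolution response (dot product), for comparison."""
    return sum(x * w for x, w in zip(patch, kernel))

patch = [1.0, 2.0, 3.0]
matching_filter = [1.0, 2.0, 3.0]
other_filter = [3.0, 0.0, 1.0]

print(adder_response(patch, matching_filter) == 0.0)  # True: exact match
print(adder_response(patch, other_filter))            # -6.0: mismatch penalized
```

A full adder layer would slide this response over all spatial positions and output channels, just as a convolution slides its dot product.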
Researcher Affiliation: Academia. Evidence: "(1) VCIP, College of Computer Science, Nankai University; (2) BNRist, Department of Computer Science and Technology, Tsinghua University"
Pseudocode: No. The paper describes the method verbally and with network architecture diagrams (Figure 6) but does not include any structured pseudocode or algorithm blocks.
Open Source Code: No. The paper does not state that code is open-sourced and provides no link to a code repository; it only names the implementation framework (PyTorch).
Open Datasets: Yes. Evidence: "All the models are trained on the DUTS-TR dataset (Wang et al. 2017). To evaluate the performance of our proposed method, we conduct a series of experiments on five popular SOD datasets, i.e., ECSSD (Yan et al. 2013), DUT-O (Yang et al. 2013), PASCAL-S (Li et al. 2014), HKU-IS (Li and Yu 2015), DUTS-TE (Wang et al. 2017)... The backbone parameters are initialized with weights pretrained on ImageNet-1K (Deng et al. 2009)."
Dataset Splits: Yes. Evidence: "All the models are trained on the DUTS-TR dataset (Wang et al. 2017). To evaluate the performance of our proposed method, we conduct a series of experiments on five popular SOD datasets, i.e., ECSSD (Yan et al. 2013), DUT-O (Yang et al. 2013), PASCAL-S (Li et al. 2014), HKU-IS (Li and Yu 2015), DUTS-TE (Wang et al. 2017), containing 1,000, 5,168, 850, 4,447, and 5,019 pairs of images and ground-truth saliency maps, respectively."
Hardware Specification: No. The paper does not specify the hardware (e.g., GPU models, CPU types, or memory) used to run the experiments.
Software Dependencies: No. Only the framework and optimizer are named, without version numbers or a dependency list: "The implementation of the proposed method is based on the PyTorch (Paszke et al. 2019) framework. All the experiments are performed using the Adam (Kingma and Ba 2015) optimizer."
Experiment Setup: Yes. Evidence: "Implementation details. The implementation of the proposed method is based on the PyTorch (Paszke et al. 2019) framework. All the experiments are performed using the Adam (Kingma and Ba 2015) optimizer, following (Yin et al. 2024a), with a batch size of 64. We train our network for 120 epochs, as we found ANN-based models converge slower than CNN-based models. The learning rate is set to 2e-4 initially, and the cosine learning rate schedule is adopted. The backbone parameters are initialized with weights pretrained on ImageNet-1K (Deng et al. 2009), and the parameters in the decoder are initialized randomly before training. We only use simple horizontal flipping for data augmentation."
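The quoted schedule (initial learning rate 2e-4, cosine decay, 120 epochs) can be sketched with the standard cosine-annealing formula. This is a generic reconstruction, not code from the paper: the minimum learning rate and the per-epoch (rather than per-step) granularity are assumptions, since the quote does not specify them.

```python
import math

def cosine_lr(epoch, total_epochs=120, base_lr=2e-4, min_lr=0.0):
    """Cosine-annealed learning rate, as commonly paired with Adam.
    base_lr and total_epochs follow the quoted setup; min_lr and the
    per-epoch granularity are assumptions."""
    progress = epoch / total_epochs
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# The rate starts at base_lr, passes through roughly half of it at the
# midpoint, and decays smoothly toward min_lr by the final epoch.
for epoch in (0, 60, 120):
    print(f"epoch {epoch:3d}: lr = {cosine_lr(epoch):.2e}")
```

In practice the same schedule is available out of the box as PyTorch's `torch.optim.lr_scheduler.CosineAnnealingLR`.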