Attack-inspired Calibration Loss for Calibrating Crack Recognition
Authors: Zhuangzhuang Chen, Qiangyu Chen, Jiahao Zhang, Zhiliang Lin, Xingyu Feng, Jie Chen, Jianqiang Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our AICL outperforms the state-of-the-art calibration methods on various benchmark datasets including CRACK2019, SDNET2018, and our BRIDGE2024. To answer the above question, we present experiments assessing the relationship between confidence and accuracy on crack recognition tasks. Experiments: In this section, we present the essential experimental setup. |
| Researcher Affiliation | Academia | The National Engineering Laboratory of Big Data System Computing Technology Shenzhen University, Shenzhen 518060, China EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Attack-inspired correctness estimation method (ACE). Input: sample x_i, feature extractor F, linear classifier C, cross-entropy (CE) loss L_CE, class number C, label set Y = {1, 2, ..., C}, maximum attack number K, step size γ, perturbation bound ε. 1: Feature extraction: F_{x_i} = F(x_i); 2: Initial pseudo label: ŷ_0 = argmax_{k∈Y} C(F_{x_i}); 3: Adversarial feature initialization: F_{x_i}^{(0)} ← F_{x_i}; 4: Attack number initialization: t ← 0; 5: while K > 0 do; 6: Δ = Π_{B[F_{x_i}^{(t)}, ε]}(γ · sign(∇_{F_{x_i}^{(t)}} L_CE(C(F_{x_i}^{(t)}), ŷ_0))); 7: F_{x_i}^{(t+1)} = F_{x_i}^{(t)} + Δ; 8: if argmax_{k∈Y} C(F_{x_i}^{(t+1)}) == ŷ_0 then; 9: t ← t + 1; 10: else; 11: κ(x_i) ← t + 1, K = 0; 12: end if; 13: K ← K − 1; 14: end while. Output: correctness degree κ(x_i) |
| Open Source Code | No | Datasets https://github.com/cheny124800/AICL - This line explicitly labels the provided URL as "Datasets", without explicitly stating that it also hosts source code for the methodology. |
| Open Datasets | Yes | Datasets https://github.com/cheny124800/AICL; CRACK2019 (Zhang et al. 2016; Ozgenel and Sorguc 2018). ... This dataset is available at https://data.mendeley.com/datasets/5y9wdsg2zt/2; SDNET2018 (Dorafshan, Thomas, and Maguire 2018). |
| Dataset Splits | Yes | This dataset is divided into training set, validation set, and test set at the ratio of 3 : 1 : 1. (This statement is repeated for BRIDGE2024, CRACK2019, and SDNET2018 datasets) |
| Hardware Specification | No | No specific hardware details for running experiments (e.g., GPU models, CPU types) are provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA) are provided in the paper. |
| Experiment Setup | Yes | For all of these datasets, the total number of training epochs is set to 40 with an initial learning rate of 0.01, reduced by a factor of 10 after epochs 15 and 25. Meanwhile, we adopt mini-batch Adam as the optimizer with a mini-batch size of 64. According to the sensitivity study, the maximum attack step K is set to 5. During the initial training epochs, the attack numbers among different samples are less informative because the classifier is not yet properly learned. For this reason, the initial 10 epochs form a burn-in period in which we adopt only the CE loss for supervision. |
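The Algorithm 1 row above describes a feature-space, PGD-style attack that counts how many signed-gradient steps are needed to flip a sample's own pseudo label; that count is the correctness degree κ. Below is a minimal NumPy sketch under stated assumptions: the classifier is linear (`C(f) = W @ f + b`, so the CE gradient has the closed form `W.T @ (softmax(logits) - onehot)`), the projection Π_B is approximated by an L∞ clip, `gamma` and `eps` are illustrative values (only K = 5 comes from the paper), and samples that survive all K steps are assigned κ = K + 1, a case the pseudocode leaves unspecified.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ace_correctness(f, W, b, K=5, gamma=0.1, eps=0.5):
    """Sketch of ACE (Algorithm 1) for a linear classifier C(f) = W @ f + b.

    Perturbs the feature f against its own pseudo label and returns the
    number of attack steps needed to flip the prediction. Unflipped
    samples get kappa = K + 1 (an assumption; the pseudocode leaves
    this case unspecified).
    """
    y0 = int(np.argmax(W @ f + b))        # step 2: initial pseudo label
    f_adv = f.astype(float).copy()        # step 3: adversarial feature init
    t = 0                                 # step 4: attack counter
    kappa = K + 1                         # fallback if the label never flips
    while K > 0:                          # step 5
        # Closed-form CE gradient w.r.t. features for a linear classifier:
        # grad = W^T (softmax(logits) - onehot(y0)).
        p = softmax(W @ f_adv + b)
        onehot = np.zeros_like(p)
        onehot[y0] = 1.0
        grad = W.T @ (p - onehot)
        # step 6: signed-gradient step, clipped as an L-inf stand-in for
        # the projection Pi_B onto the eps-ball.
        delta = np.clip(gamma * np.sign(grad), -eps, eps)
        f_adv = f_adv + delta             # step 7
        if int(np.argmax(W @ f_adv + b)) == y0:
            t += 1                        # step 9: prediction survived
        else:
            kappa = t + 1                 # step 11: flipped after t+1 steps
            K = 0
        K -= 1                            # step 13
    return kappa
```

On a toy two-class linear classifier, features near the decision boundary flip in one step (low κ), while features deep inside a class survive more steps or the whole budget (high κ), which is the correctness signal the loss then calibrates against.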
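The Experiment Setup row above fully determines the learning-rate schedule and the burn-in rule, so they can be written down as two small helpers. This is a sketch, not the authors' code; it assumes the decay applies from the stated epoch onward (epochs ≥ 15 and ≥ 25, 0-indexed), which is one plausible reading of "reduced by a factor of 10 after 15, 25 epochs".

```python
def lr_at_epoch(epoch, base_lr=0.01):
    """Step schedule from the paper: start at 0.01, divide by 10
    after epochs 15 and 25 (boundary convention is an assumption)."""
    lr = base_lr
    if epoch >= 15:
        lr /= 10
    if epoch >= 25:
        lr /= 10
    return lr

def use_ce_only(epoch, burn_in=10):
    """Burn-in rule: the first 10 epochs are supervised with the
    CE loss alone, before the attack-derived signal is trusted."""
    return epoch < burn_in
```

With these, a training loop would run epochs 0-39, query `lr_at_epoch` each epoch, and switch from plain CE to the full AICL objective once `use_ce_only` returns False.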