Adversarial Learning Under Hybrid Perturbations for Robust Acute Lymphoblastic Leukemia Classification
Authors: Jie Chen, Xinyuan Liu, Xintong Liu, Jianqiang Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This work is the first to identify the poor robustness of existing acute lymphoblastic cell classification models (Section 4.1). The framework is also verified on a real-world dataset to show its effectiveness (Section 4.2). The proposed hybrid adversarial training strategy is tested on the public acute lymphoblastic leukemia dataset and found to outperform existing acute lymphoblastic cell classification models. |
| Researcher Affiliation | Academia | Jie Chen1, Xinyuan Liu1, Xintong Liu1, Jianqiang Li1,2*; 1College of Computer Science and Software Engineering, Shenzhen University, China; 2National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, China; EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Hybrid-Perturbation Adversarial Training (HPAT) |
| Open Source Code | No | The paper does not explicitly state that source code for the methodology is provided, nor does it include a link to a code repository. |
| Open Datasets | Yes | The acute lymphoblastic leukemia dataset C-NMC from the IEEE ISBI 2019 B-ALL benign and malignant cell classification challenge (Gupta et al. 2020), which contains a large number of labeled images of benign and malignant cells. |
| Dataset Splits | Yes | C-NMC contains a total of 12,527 images. Of these, 8,258 images are used for training, 2,132 for validation, and 1,867 for the final test. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory amounts, or specific computer specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using the "Adam optimizer" but does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers, which are critical for reproducibility. |
| Experiment Setup | Yes | All models were trained on the C-NMC dataset for 20 epochs, using the Adam optimizer with a batch size of 16 to update network parameters. The initial learning rate is 0.001 and decays by 0.1 every ℓ2 steps. The pixel-level perturbations were obtained using an iterative PGD algorithm with a perturbation strength of 6/255, 6 iterations, and a step size of 1/255. The spatial-transformed perturbations were implemented using the STBO algorithm, with rotational and translational perturbation ranges of [−180°, 180°] and [−12px, +12px]. The number of hyperparameter searches is 100. |
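The pixel-level attack described in the setup row (PGD with strength 6/255, 6 iterations, step size 1/255) can be sketched as follows. This is a minimal illustration, not the authors' code: the paper's CNN classifiers are not released, so a logistic-regression "model" with an analytic input gradient stands in for them, and the function name `pgd_linf` is our own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_linf(x, y, w, b, eps=6/255, alpha=1/255, steps=6):
    """L-infinity PGD attack, with hyperparameters matching the paper's
    setup (eps=6/255, 6 iterations, step size 1/255). The logistic
    'model' here is a stand-in for the paper's CNN classifiers."""
    x_adv = x.copy()
    for _ in range(steps):
        # Gradient of the binary cross-entropy loss w.r.t. the input:
        # dL/dx = (sigmoid(w.x + b) - y) * w
        p = sigmoid(x_adv @ w + b)
        grad = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

In the paper's hybrid scheme this pixel-level perturbation is combined with spatial perturbations (rotations in [−180°, 180°], translations in [−12px, +12px] via STBO), whose search procedure is not reproduced here.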