A Multiscale Frequency Domain Causal Framework for Enhanced Pathological Analysis
Authors: Xiaoyu Cui, Weixing Chen, Jiandong Su
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the Camelyon16 and TCGA-NSCLC datasets show that, compared to previous work, our method has significantly improved accuracy and generalization ability, providing a new theoretical perspective for medical image analysis and potentially advancing the field further. (Section 4: Experiment) |
| Researcher Affiliation | Academia | 1Northeastern University 2 Sun Yat-sen University 3Shenzhen Institute of Advanced Technology EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods and modules (CMIM, MSRM, FSRM) but does not present them in a structured pseudocode or algorithm block format. |
| Open Source Code | Yes | The code will be released at https://github.com/WissingChen/MFC-MIL. |
| Open Datasets | Yes | The Camelyon16 dataset Bejnordi et al. (2017) is widely used for detecting breast cancer metastases. ... Meanwhile, the TCGA-NSCLC dataset focuses on two lung cancer subtypes, LUSC and LUAD, with 1,054 whole slide images. |
| Dataset Splits | Yes | The Camelyon16 dataset... 270 training and 129 testing images... Meanwhile, the TCGA-NSCLC dataset... is divided into training, validation, and test sets in a 7:1:2 ratio... To evaluate the effectiveness of our approach, we apply four key metrics for classification performance: accuracy, F1 score, specificity, and the area under the receiver operating characteristic curve (AUC). These metrics provide a comprehensive assessment of the method's overall performance... using 5-fold cross-validation. |
| Hardware Specification | Yes | All experiments were conducted on an NVIDIA GeForce RTX 2080Ti. |
| Software Dependencies | No | In the feature extraction process, we employed a CNN-based ResNet18, with parameters pre-trained using SimCLR as part of the DSMIL framework. ... we used the Adam optimizer with an initial learning rate of 2e-4 and a weight decay of 5e-4. |
| Experiment Setup | Yes | The model operates with a dimension of 512, while the value of k in CMIM is set to 16 for high-resolution features and 32 for low-resolution features. For most experiments, we used the Adam optimizer with an initial learning rate of 2e-4 and a weight decay of 5e-4. Additionally, our MFC estimates the mediator using patch-level features and applies it to intervene in the aggregated bag-level prediction vector. The mini-batch size used for training is 1, and the model is trained for 100 epochs. |
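The reported experiment setup can be sketched as a small configuration, a minimal illustration only: the dictionary keys and the `split_7_1_2` helper are placeholders inferred from the table, not the authors' code.

```python
import random


def split_7_1_2(n_slides, seed=0):
    """Partition slide indices into train/val/test in the reported 7:1:2 ratio."""
    idx = list(range(n_slides))
    random.Random(seed).shuffle(idx)
    n_train = int(0.7 * n_slides)
    n_val = int(0.1 * n_slides)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]


# Hyperparameters as reported in the paper's experiment setup.
CONFIG = {
    "feature_dim": 512,   # model dimension
    "k_high_res": 16,     # k in CMIM for high-resolution features
    "k_low_res": 32,      # k in CMIM for low-resolution features
    "optimizer": "Adam",
    "lr": 2e-4,
    "weight_decay": 5e-4,
    "batch_size": 1,      # one whole-slide-image bag per step
    "epochs": 100,
}

# 1,054 whole slide images in TCGA-NSCLC, per the Open Datasets row.
train, val, test = split_7_1_2(1054)
```

This makes the split ratio and hyperparameters concrete; note the paper additionally reports 5-fold cross-validation, which would replace the single fixed split above.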