Fully Test-Time Adaptation for Feature Decrement in Tabular Data
Authors: Zi-Jian Cheng, Zi-Yi Jia, Kun-Yang Yu, Zhi Zhou, Lan-Zhe Guo
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate that our proposal significantly improves both performance and robustness in missing feature imputation and adaptation scenarios. Comprehensive experiments on 9 datasets demonstrate that proposed FTTA methods exhibit significant improvements in performance and robustness in feature decrements over 11 comparison models. |
| Researcher Affiliation | Academia | 1) School of Intelligence Science and Technology, Nanjing University, China; 2) National Key Laboratory for Novel Software Technology, Nanjing University, China; 3) School of Artificial Intelligence, Nanjing University, China |
| Pseudocode | No | The paper describes the methodology for LLM-IMPUTE and ATLLM with textual descriptions and figures (Figure 3: a prompt template, Figure 4: an overview diagram), but it does not contain a clearly labeled pseudocode or algorithm block with structured steps. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide a link to a code repository or mention code in supplementary materials. |
| Open Datasets | Yes | To effectively simulate feature-decrement scenarios in tabular data, we select a variety of open-source and reliable datasets from OpenML and Kaggle's extensive dataset library. These datasets encompass three primary tasks: binary classification, multi-class classification, and regression, and span a range of fields such as finance and healthcare. A summary of the key attributes of the datasets is provided in Appendix A. |
| Dataset Splits | No | The paper mentions using 'training data', 'testing data', and 'validation' in a general sense but does not provide specific details on how the datasets were split, such as percentages, sample counts, or the methodology used for partitioning. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions 'Llama3-8B, a model released by Meta AI in April 2024' as a specific LLM used. However, it does not provide a reproducible description of ancillary software, such as programming languages, libraries, or other tools with their specific version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | During fine-tuning, the number of epochs is set to 30 to ensure that the model has ample opportunity to learn and converge. The learning rate is set to 1e-5 to prevent overfitting and enable the model to converge effectively. |
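The two reported hyperparameters (30 epochs, learning rate 1e-5) can be illustrated with a minimal training-loop sketch. The linear model, synthetic data, and loss below are hypothetical stand-ins, not the paper's actual LLM fine-tuning setup, which is not described in runnable detail:

```python
# Minimal sketch of a training loop using the paper's stated
# hyperparameters. Everything except EPOCHS and LEARNING_RATE is
# an illustrative placeholder.

EPOCHS = 30           # "ample opportunity to learn and converge"
LEARNING_RATE = 1e-5  # small step size, stated to prevent overfitting

def fine_tune(xs, ys, epochs=EPOCHS, lr=LEARNING_RATE):
    """Plain gradient descent on a 1-D linear model y = w * x."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# Toy usage: with such a small learning rate, 30 epochs move the
# weight only slightly toward the true slope of 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w = fine_tune(xs, ys)
```

With lr=1e-5 the updates are tiny, which matches the paper's rationale: a conservative step size trades convergence speed for stability over the 30 epochs.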