Transferable Relativistic Predictor: Mitigating Cross-Task Cold-Start Issue in NAS
Authors: Nan Li, Bing Xue, Lianbo Ma, Mengjie Zhang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results in different search spaces show the superior performance of TRP compared with state-of-the-art predictors. TRP requires only 54 and 73 evaluated architectures for a warm start on CIFAR-10 and CIFAR-100, respectively, under the DARTS search space. |
| Researcher Affiliation | Academia | Nan Li (1,2), Bing Xue (2,3), Lianbo Ma (1), Mengjie Zhang (2,3). (1) College of Software, Northeastern University; (2) Centre for Data Science and Artificial Intelligence; (3) School of Engineering and Computer Science, Victoria University of Wellington. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology using mathematical formulations and descriptive text, but it does not contain a distinct pseudocode block or algorithm section. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing its own source code or a link to a code repository. It mentions 'implemented by ourselves using open source code' in Table 2 and Table 3, but this refers to the baselines/competitors, not the authors' own implementation of TRP. |
| Open Datasets | Yes | Extensive experimental results in different search spaces show the superior performance of TRP compared with state-of-the-art predictors. TRP requires only 54 and 73 evaluated architectures for a warm start on CIFAR-10 and CIFAR-100, respectively, under the DARTS search space. ... The experiments are conducted on the NAS-Bench-201 and TransNAS-Bench-101 benchmarks. ... The proxy dataset is constructed using zero-shot metrics on CIFAR-10 for pretraining, followed by fine-tuning on CIFAR-10, CIFAR-100, and ImageNet16-120. For TransNAS-Bench-101, zero-shot metrics from the object classification task are used to construct the proxy dataset, and fine-tuning is performed on all datasets. |
| Dataset Splits | Yes | In NAS-Bench-201, the proxy dataset is constructed using zero-shot metrics on CIFAR-10 for pretraining, followed by fine-tuning on CIFAR-10, CIFAR-100, and ImageNet16-120. For TransNAS-Bench-101, zero-shot metrics from the object classification task are used to construct the proxy dataset, and fine-tuning is performed on all datasets. ... Following [White et al., 2021a], we randomly select a certain number of architectures in the DARTS search space and then train them from scratch to construct the evaluated architectures for the pretraining predictor, where the number of selected architectures is determined by the proposed adaptive approach. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software libraries, frameworks, or programming languages used in the experiments. |
| Experiment Setup | Yes | For each selected architecture, the number of epochs is set to 50 with a batch size of 64. |
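The paper pins down only two training hyperparameters (50 epochs, batch size 64). A minimal sketch of what that implies for the per-architecture training budget, assuming CIFAR-10's standard 50,000 training images (the helper names and config class below are hypothetical, not from the paper):

```python
from dataclasses import dataclass


@dataclass
class TrainConfig:
    """Hyperparameters stated in the paper's experiment setup."""
    epochs: int = 50
    batch_size: int = 64


def steps_per_epoch(num_samples: int, batch_size: int) -> int:
    """Number of mini-batches per epoch (ceiling division)."""
    return -(-num_samples // batch_size)


cfg = TrainConfig()
# CIFAR-10 has 50,000 training images.
per_epoch = steps_per_epoch(50_000, cfg.batch_size)   # 782 steps
total_steps = cfg.epochs * per_epoch                   # 39,100 steps
print(per_epoch, total_steps)
```

This back-of-the-envelope count is per evaluated architecture; with the reported warm-start budgets of 54 (CIFAR-10) and 73 (CIFAR-100) architectures, the total evaluation cost scales linearly with that number.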