Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
ParZC: Parametric Zero-Cost Proxies for Efficient NAS
Authors: Peijie Dong, Lujun Li, Zhenheng Tang, Xiang Liu, Zimian Wei, Qiang Wang, Xiaowen Chu
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on NASBench-101, 201, and NDS demonstrate the superiority of our proposed ParZC compared to existing zero-shot NAS methods. |
| Researcher Affiliation | Academia | (1) The Hong Kong University of Science and Technology (Guangzhou); (2) The Hong Kong University of Science and Technology; (3) National University of Defense Technology; (4) Harbin Institute of Technology, Shenzhen |
| Pseudocode | No | The paper describes the ParZC framework and MABN but does not present them in a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide explicit statements or links indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | Datasets. We conduct experiments on various NAS benchmarks with extensive search spaces including NASBench-101 (NB101) (Ying et al. 2019), NAS-Bench-201 (NB201) (Dong and Yang 2020), and Network Design Spaces (NDS) (Radosavovic et al. 2019) with DARTS (Liu, Simonyan, and Yang 2019)/NASNet (Zoph and Le 2017)/ENAS (Pham et al. 2018a), spanning CIFAR-10 (Krizhevsky, Nair, and Hinton 2014), CIFAR-100 (Krizhevsky 2009), and ImageNet16-120 (Chrabaszcz, Loshchilov, and Hutter 2017) datasets. To verify the adaptability of ParZC, we extend the experiments to the ViT search space, a.k.a. Autoformer (Chen et al. 2021a), on ImageNet-1k. |
| Dataset Splits | Yes | Tab. 2 and 3 present the Kendall's Tau obtained from the same data splits, denoted as S#samples for the NB101 benchmark, utilizing 0.02% to 1% of the entire search space, and S#samples for the NB201 benchmark, utilizing 0.05% to 10% of the entire search space. We compare our ParZC with one-shot (Guo et al. 2019; Chu, Zhang, and Xu 2021) and predictor-based NAS (Wen et al. 2020; Lu et al. 2023, 2021). |
| Hardware Specification | Yes | All of the experiments are conducted on an RTX 4090Ti with the PyTorch (Paszke et al. 2019) framework. |
| Software Dependencies | No | All of the experiments are conducted on an RTX 4090Ti with the PyTorch (Paszke et al. 2019) framework. The specific version number for PyTorch is not provided, only the citation year. |
| Experiment Setup | Yes | For NB101 and NB201, we utilize the Adam optimizer with a learning rate of 1e-4 and weight decay of 1e-3. The training batch size is 10, and the evaluation batch size is 50. The training epochs on NB101, NB201, and NDS are 150, 200, and 296, respectively. Specifically for NDS, we mainly conduct experiments on the NASNet, DARTS, and ENAS search spaces to verify the ranking ability of ParZC. DiffKendall is used as the loss function when training ParZC, with α = 0.5. We detail the training settings for different search spaces in Supp. A.3. The hyperparameters of our proposed MABN, such as hidden size, dropout rate, and embedding dimension, are finely tuned using Bayesian optimization with Optuna (Akiba et al. 2019) (for more details, please refer to Supp. A.4). |
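The hyperparameters quoted in the Experiment Setup row can be collected into a minimal sketch; the class and field names below are illustrative placeholders, not the authors' code, and only the values reported in the paper excerpt are assumed.

```python
# Hedged sketch of the ParZC training configuration reported in the paper;
# names (ParZCTrainConfig, EPOCHS) are hypothetical, values come from the excerpt.
from dataclasses import dataclass


@dataclass(frozen=True)
class ParZCTrainConfig:
    optimizer: str = "Adam"       # Adam optimizer for NB101/NB201
    lr: float = 1e-4              # learning rate
    weight_decay: float = 1e-3    # weight decay
    train_batch_size: int = 10    # training batch size
    eval_batch_size: int = 50     # evaluation batch size
    alpha: float = 0.5            # α used with the DiffKendall loss


# Per-benchmark training epochs reported in the excerpt.
EPOCHS = {"NB101": 150, "NB201": 200, "NDS": 296}

cfg = ParZCTrainConfig()
```

A frozen dataclass is used only to make the reported settings explicit and immutable; the actual MABN hyperparameters (hidden size, dropout rate, embedding dimension) are tuned with Optuna and are not fixed here.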