HEP-NAS: Towards Efficient Few-shot Neural Architecture Search via Hierarchical Edge Partitioning

Authors: Jianfeng Li, Jiawen Zhang, Feng Wang, Lianbo Ma

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments and Analysis: We evaluate the performance of HEP-NAS on the DARTS search space with CIFAR-10, CIFAR-100, and ImageNet for image classification, as well as the NAS-Bench-201 search space with CIFAR-10, CIFAR-100, and ImageNet16-120. All experiments are conducted on a single NVIDIA RTX 4090 GPU and the results are obtained over 4 independent runs. Experimental results demonstrate that HEP-NAS outperforms state-of-the-art algorithms in most cases.
Researcher Affiliation | Academia | 1 School of Computer Science, Wuhan University; 2 College of Software, Northeastern University; EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Main process of HEP-NAS
Open Source Code | Yes | https://github.com/Jianf-l/hepnas
Open Datasets | Yes | We evaluate the performance of HEP-NAS on the DARTS search space with CIFAR-10, CIFAR-100, and ImageNet for image classification, as well as the NAS-Bench-201 search space with CIFAR-10, CIFAR-100, and ImageNet16-120.
Dataset Splits | Yes | Algorithm 1: Main process of HEP-NAS ... 11: Select sub-supernet M with the highest accuracy on the validation dataset using Eq. 5; ... Results on DARTS Search Space: DARTS search space is widely used for gradient-based NAS algorithms ... We train the supernet for 45 epochs with batch size 128 and split the supernet at 15, 25, 35, and 45 epochs, respectively. ... Results on NAS-Bench-201 Search Space: NAS-Bench-201 (Dong and Yang 2020) is another widely used NAS benchmark ... We only search architectures on CIFAR-10 and CIFAR-100, then transfer to ImageNet16-120.
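The selection step quoted above (Algorithm 1, line 11) amounts to an argmax over validation accuracy across the candidate sub-supernets. A minimal sketch of that step, where `sub_supernets` and `evaluate` are hypothetical stand-ins for the paper's sub-supernet objects and its Eq. 5 evaluation routine:

```python
def select_best_subsupernet(sub_supernets, val_loader, evaluate):
    """Pick the sub-supernet with the highest validation accuracy.

    `evaluate(model, val_loader)` is assumed to return a scalar
    validation accuracy (the paper's Eq. 5); this is an illustrative
    sketch, not the authors' implementation.
    """
    best_model, best_acc = None, float("-inf")
    for model in sub_supernets:
        acc = evaluate(model, val_loader)
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model, best_acc
```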
Hardware Specification | Yes | All experiments are conducted on a single NVIDIA RTX 4090 GPU
Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the provided text.
Experiment Setup | Yes | Settings: DARTS search space is widely used for gradient-based NAS algorithms and contains 8 candidate operations. All corresponding hyper-parameter settings are kept the same as in DARTS. We train the supernet for 45 epochs with batch size 128 and split the supernet at 15, 25, 35, and 45 epochs, respectively. We set the warmup epochs to 5 initially and decrease them sequentially when splitting the next hierarchy, as the sub-supernets have already been sufficiently trained.
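The quoted schedule (45 epochs, batch size 128, splits at epochs 15/25/35/45, warmup decreasing per hierarchy) can be sketched as a training-loop skeleton. All names here are illustrative assumptions, not the authors' code; in particular, the warmup-decay policy below (minus one per hierarchy) is a guess, since the paper excerpt only says warmup is decreased sequentially:

```python
TOTAL_EPOCHS = 45
BATCH_SIZE = 128
SPLIT_EPOCHS = [15, 25, 35, 45]   # epochs at which the supernet is split
INITIAL_WARMUP = 5                # warmup epochs for the first hierarchy

def warmup_for_hierarchy(level, initial=INITIAL_WARMUP):
    """Illustrative policy: warmup shrinks by one per hierarchy level."""
    return max(initial - level, 0)

def training_schedule():
    """Yield (epoch, event) pairs describing the HEP-NAS-style schedule."""
    hierarchy = 0
    for epoch in range(1, TOTAL_EPOCHS + 1):
        yield epoch, "train"              # one supernet training epoch
        if epoch in SPLIT_EPOCHS:
            yield epoch, f"split (warmup={warmup_for_hierarchy(hierarchy)})"
            hierarchy += 1
```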