Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Minimax Optimal Deep Neural Network Classifiers Under Smooth Decision Boundary

Authors: Tianyang Hu, Ruiqi Liu, Zuofeng Shang, Guang Cheng

JMLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Numerical experiments are conducted on simulated data to corroborate our theoretical results. In this section, we corroborate our theory with numerical experiments on 2-dimensional synthetic data.
Researcher Affiliation Academia Tianyang Hu (EMAIL), The Chinese University of Hong Kong, Shenzhen, China; Ruiqi Liu (EMAIL), Texas Tech University, Lubbock, TX; Zuofeng Shang (EMAIL), New Jersey Institute of Technology, Newark, NJ; Guang Cheng (EMAIL), University of California, Los Angeles, Los Angeles, CA
Pseudocode No The paper describes methods in mathematical terms and prose without presenting any explicitly labeled pseudocode or algorithm blocks.
Open Source Code No The paper does not contain any explicit statements about releasing source code for the methodology described, nor does it provide any links to a code repository.
Open Datasets No Numerical experiments are conducted on simulated data to corroborate our theoretical results. The paper describes a process for generating synthetic data but does not provide access information or state its public availability.
Dataset Splits Yes Sample size is chosen to be 1000 for the experiments in Figure 5(a). The neural network training is done by stochastic gradient descent (initial learning rate=0.1, momentum=0.9, weight decay=0.001). The batch size is chosen to be 100. The total iteration number is 10000, with learning rate decayed by 1/10 every 2000 steps. To make a fair comparison, we fix the random seed for the data generating process. The randomness comes from network initialization and batch selection. Test accuracy is evaluated by sampling one million test data points.
Hardware Specification No The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, processors, memory) used to run the experiments.
Software Dependencies No The paper states "All experiments are conducted using PyTorch" but does not provide a version number, nor does it list any other software dependencies with specific version numbers.
Experiment Setup Yes The neural network training is done by stochastic gradient descent (initial learning rate=0.1, momentum=0.9, weight decay=0.001). The batch size is chosen to be 100. The total iteration number is 10000, with the learning rate decayed by 1/10 every 2000 steps. To be more specific, we choose efn to be a 3-layer ReLU network with width 250 and bfn to be the composition of M = 5 local ReLU classifiers, each with depth 3 and width 100.
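The step-decay schedule quoted in the setup above (initial learning rate 0.1, decayed by a factor of 10 every 2000 steps over a 10000-iteration run) can be sketched as a plain function. This is an illustrative sketch only; the function name and boundary handling are assumptions, not code from the paper.

```python
def step_decay_lr(step, base_lr=0.1, decay_factor=0.1, decay_every=2000):
    """Learning rate after `step` SGD iterations under a step-decay
    schedule: multiply by `decay_factor` once every `decay_every` steps."""
    return base_lr * decay_factor ** (step // decay_every)

# Over the 10000-iteration run described above, the rate takes five values:
# steps 0-1999 -> 0.1, steps 2000-3999 -> 0.01, ..., steps 8000-9999 -> 1e-5.
```

The same schedule is available in PyTorch as `torch.optim.lr_scheduler.StepLR(optimizer, step_size=2000, gamma=0.1)`, assuming the decay is applied per iteration rather than per epoch.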