Learning Discretized Neural Networks under Ricci Flow
Authors: Jun Chen, Hanwen Chen, Mengmeng Wang, Guang Dai, Ivor W. Tsang, Yong Liu
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results across various datasets demonstrate that our method achieves superior and more stable performance for DNNs compared to other representative training-based methods. (Section 7, Experiments:) In this section, we conduct ablation studies to compare our RF-DNN trained from scratch with other STE methods. Additionally, when evaluating the performance of the RF-DNN with a pre-trained model, we compare it with several representative training-based methods on classification benchmark datasets. All experiments are implemented in Python using PyTorch (Paszke et al., 2019). |
| Researcher Affiliation | Collaboration | 1Institute of Cyber-Systems and Control, Zhejiang University, China 2School of Computer Science and Technology, Zhejiang Normal University, China 3SGIT AI Lab, State Grid Corporation of China, China 4Centre for Frontier Artificial Intelligence Research, Agency for Science, Technology and Research (A*STAR), Singapore 5Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 6College of Computing and Data Science, Nanyang Technological University, Singapore |
| Pseudocode | Yes | Algorithm 1: An algorithm for training DNNs in the LNE manifold, using the gradient defined in the LNE manifold. Algorithm 2: An algorithm for training our RF-DNNs in the LNE manifold. We introduce a parameter α to balance the regularization and ensure the existence of the solution for the discrete Ricci flow. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It only mentions general implementation details without a link or explicit statement of code release. |
| Open Datasets | Yes | CIFAR datasets: There are two CIFAR benchmarks (Krizhevsky et al., 2009), each consisting of natural color images with 32 × 32 pixels. ImageNet dataset: The ImageNet benchmark (Russakovsky et al., 2015) consists of 1.2 million high-resolution natural images, with a validation set containing 50k images. |
| Dataset Splits | Yes | Both [CIFAR] datasets comprise 50k training images, 10k test images, and a validation set of 5k images selected from the training set. The ImageNet benchmark (Russakovsky et al., 2015) consists of 1.2 million high-resolution natural images, with a validation set containing 50k images. |
| Hardware Specification | Yes | The hardware environment includes an Intel(R) Xeon(R) Silver 4214 CPU (2.20 GHz), GeForce GTX 2080Ti GPU, and 128GB RAM. |
| Software Dependencies | No | All experiments are implemented in Python using PyTorch (Paszke et al., 2019). The paper mentions the use of Python and PyTorch but does not specify their version numbers. |
| Experiment Setup | Yes | Batch normalization with a batch size of 128 is employed in the learning strategy, and Nesterov momentum of 0.9 (Dozat, 2016) is used in SGD optimization. For CIFAR, we set the total training epochs to 200 and a weight decay of 0.0005. The learning rate is reduced by a factor of 10 at epochs 80, 150, and 190, starting with an initial value of 0.1. For ImageNet, we set the total training epochs to 100 and use a cosine annealing schedule for the learning rate of each parameter group with a weight decay of 0.0001. |
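The CIFAR optimizer and learning-rate schedule quoted above can be expressed directly with standard PyTorch components. The sketch below is an illustration of that configuration only, not the authors' released code: the `torch.nn.Linear` model is a placeholder for the actual discretized network (RF-DNN), and the training-loop body is omitted.

```python
import torch

# Placeholder model; the paper trains discretized networks (RF-DNNs).
model = torch.nn.Linear(10, 10)

# SGD with Nesterov momentum 0.9 and weight decay 0.0005, initial lr 0.1,
# as stated in the paper's CIFAR setup (batch size 128).
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, nesterov=True, weight_decay=5e-4
)

# Learning rate divided by 10 at epochs 80, 150, and 190 over 200 epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[80, 150, 190], gamma=0.1
)

lrs = []
for epoch in range(200):
    optimizer.step()  # per-batch training omitted in this sketch
    scheduler.step()
    lrs.append(optimizer.param_groups[0]["lr"])
```

For the ImageNet setting, the paper instead uses 100 epochs with `torch.optim.lr_scheduler.CosineAnnealingLR` and a weight decay of 0.0001.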