Inductive Gradient Adjustment for Spectral Bias in Implicit Neural Representations
Authors: Kexuan Shi, Hai Chen, Leheng Zhang, Shuhang Gu
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Theoretical and empirical analyses validate the impact of IGA on spectral bias. Further, we evaluate our method on different INR tasks with various INR architectures and compare to existing training techniques. The superior and consistent improvements clearly validate the advantage of our IGA. [...] We present comprehensive experimental analyses on synthetic data and a range of implicit neural representation tasks. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; (2) Sun Yat-sen University, Guangzhou, China; (3) North China Institute of Computer Systems Engineering. Correspondence to: Shuhang Gu <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Inductive Gradient Adjustment (IGA). Input: mini-batch Xt = {(xi, yi)}, i = 1..N; model f(·; Θt). Output: updated parameters Θt+1. // Sample a subset: sample Xe ⊆ Xt based on the strategy in Sec. 3.4. // Compute gradients and empirical kernel: compute Ge = ∇Θt f(Xe; Θt), then Ke = Geᵀ Ge. // Construct the transformation matrix: compute Se from Ke using the method in Sec. 3.4. // Generalize the adjustment to the full mini-batch: adjusted gradient gt = Σ (i = 1..p) ∇Θt f(Xi; Θt) Se rᵗᵢ; update the parameters by Θt+1 ← Θt − η gt. Return Θt+1. |
| Open Source Code | Yes | The codes are available at: https://github.com/LabShuHangGU/IGA-INR. |
| Open Datasets | Yes | We test on the first 8 images from the Kodak 24 dataset (Franzen, 1999), each containing 768×512 pixels. [...] using five 3D objects from the public dataset (Martel et al., 2021; Zhu et al., 2024) [...] on the downscaled Blender dataset (Mildenhall et al., 2021). |
| Dataset Splits | No | The paper evaluates its method on individual instances (e.g., 8 images from Kodak 24, five 3D objects, Blender dataset scenes) but does not provide specific train/test/validation splits or percentages for these datasets. For example, in 5.1 it refers to training on images directly without specifying a split. |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU/GPU models, memory specifications, or cloud computing resources used for running its experiments. |
| Software Dependencies | No | The NeRF-PyTorch codebase (Yen-Chen, 2020) is used, but specific version numbers for PyTorch or other libraries are not provided. |
| Experiment Setup | Yes | For ReLU, we halve the frequency of each component due to its limited representation capacity (Yuce et al., 2022). For IGA, we set p to 1, 4 and 8 and vary end from 1 to 7 to construct the corresponding Se. When p = 1, IGA degenerates to K-based adjustment. We adopt a four-hidden-layer, 256-width architecture for ReLU and SIREN, optimized by Adam for 10K iterations with a fixed learning rate. All models are repeated by 10 random seeds. [...] In Appendix G: "maintains a fixed rate for the first 3K iterations and then reduces it by 0.1 for another 7K iterations. For IGA, we set initial learning rates as 5e-3 for ReLU activation and 1e-3 for Sine activation. For all baseline models, we set initial learning rates as 1e-3 due to poor performance observed with 5e-3. Full-batch training is adopted." |
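For a concrete picture of the kernel-based adjustment that Algorithm 1 generalizes (the paper notes that IGA degenerates to K-based adjustment when p = 1), here is a minimal NumPy sketch. The construction of Se in Sec. 3.4 is not reproduced in the excerpts above, so the damped inverse kernel below is a placeholder assumption, chosen only to illustrate how reweighting residuals through the empirical kernel equalizes per-component convergence rates and thereby counters spectral bias.

```python
import numpy as np

def kernel_adjusted_gradient(G, r, lam=1e-6):
    """One kernel-based gradient adjustment step (illustrative sketch).

    G   : (N, P) per-sample Jacobian, row i = d f(x_i; Theta) / d Theta
    r   : (N,)   residuals r_i = f(x_i; Theta) - y_i
    lam : damping; S = (K + lam*I)^-1 is a stand-in for the paper's S_e,
          NOT the actual Sec. 3.4 construction.
    """
    # Plain GD on 0.5*||r||^2 gives g = G.T @ r, with error dynamics
    # r_{t+1} ~= (I - eta*K) r_t for K = G @ G.T: components along small
    # eigenvalues of K (high frequencies) shrink slowly -- spectral bias.
    K = G @ G.T
    # Placeholder transformation: with K @ S ~= I, every residual
    # component is reduced at a comparable rate.
    S = np.linalg.inv(K + lam * np.eye(len(K)))
    return G.T @ (S @ r)

# Toy check: a diagonal Jacobian makes the kernel spectrum explicit.
G = np.diag([10.0, 0.5])  # one fast and one slow direction
r = np.array([1.0, 1.0])
g = kernel_adjusted_gradient(G, r)
K = G @ G.T
S = np.linalg.inv(K + 1e-6 * np.eye(2))
print(np.round(K @ S @ r, 4))  # ~[1, 1]: per-component rates equalized
```

Full IGA additionally builds Se from a sampled subset Xe and generalizes the adjustment across p groups of the mini-batch, which this single-batch sketch does not attempt.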