Learning Input Encodings for Kernel-Optimal Implicit Neural Representations
Authors: Zhemin Li, Liyuan Ma, Hongxia Wang, Yaoyun Zeng, Xiaolong Han
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate PEAK from three aspects: (1) verifying the kernel alignment properties of PEAK, (2) analyzing the impact of architectural choices, and (3) examining performance in both linear and nonlinear inverse problems, comparing with strong baseline methods, including vanilla MLP (ReLU activation), Fourier feature networks (Fourier) (Tancik et al., 2020), and Hash (Müller et al., 2022). |
| Researcher Affiliation | Academia | Department of Mathematics, National University of Defense Technology, Changsha, China. Correspondence to: Hongxia Wang <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 PEAK Training Algorithm |
| Open Source Code | Yes | Code is available at https://github.com/lizhemin15/KAR. |
| Open Datasets | Yes | Experiments on image inpainting, phase retrieval, and Neural Radiance Field demonstrate the effectiveness of the proposed approach in improving the generalization of INRs. ... We conduct experiments on the Jetplane image... For the Baboon image... We compare PEAK with Instant-NGP (Müller et al., 2022) on NeRF using 25, 50, and 100 input views from the NeRF synthetic dataset. |
| Dataset Splits | Yes | We test three complex scenarios with missing patterns shown in Figure 4(a): Random (50% pixels randomly removed), Patch (structured regions missing), and Textural (complex patterns missing). ... We compare PEAK with Instant-NGP (Müller et al., 2022) on NeRF using 25, 50, and 100 input views from the NeRF synthetic dataset. |
| Hardware Specification | No | No specific hardware details are mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | We investigated the impact of regularization coefficient λ on model performance. We examined λ ∈ {10⁻⁵, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 1, 10, 10²}. ... We explored dimensions r ∈ {10, 50, 100, 200, 300, 400, 500}. ... We empirically used ρ = 1 and ϵ = 0.001, and took z_T ∈ ℝⁿ as the final reconstruction result. |
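The hyperparameter sweep quoted in the Experiment Setup row can be sketched as a simple grid search. This is a minimal illustration, not the authors' code: the grids for λ and r and the fixed ρ, ϵ values are taken from the paper's quoted text, while the `evaluate` function is a hypothetical stand-in for training PEAK with a given (λ, r) pair and returning a validation score.

```python
import itertools

# Grids quoted from the paper's experiment setup.
LAMBDAS = [10.0 ** k for k in range(-5, 3)]    # λ ∈ {1e-5, ..., 1e2}
RANKS = [10, 50, 100, 200, 300, 400, 500]      # encoding dimension r
RHO, EPS = 1.0, 0.001                          # fixed values reported in the paper

def evaluate(lam, r):
    """Hypothetical placeholder: train PEAK with regularization lam and
    dimension r, then return a scalar validation score (higher is better).
    The dummy formula below exists only to make the sketch runnable."""
    return -abs(lam - 1e-3) - abs(r - 200) / 1000.0

# Exhaustive search over the 8 x 7 grid.
best_lam, best_r = max(itertools.product(LAMBDAS, RANKS),
                       key=lambda pair: evaluate(*pair))
```

With a real `evaluate`, the same loop reproduces the reported sweep over all 56 (λ, r) combinations.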