Isometric Regularization for Manifolds of Functional Data

Authors: Hyeongjun Heo, Seonghun Oh, JaeYong Lee, Young Min Kim, Yonghyeon Lee

ICLR 2025

Reproducibility variables (Variable: Result — supporting LLM response):
Research Type: Experimental — "Through extensive experiments, we validate the effectiveness of our isometric regularization for functional data on various tasks with INRs, including neural Signed Distance Functions (SDFs) (Park et al., 2019), neural Bidirectional Reflectance Distribution Functions (BRDFs) (Fan et al., 2022), and Deep Operator Networks (DONets) (Lu et al., 2019), showing that our method is modality-independent. Further, we illustrate that isometric regularization guides the model F to learn an accurate manifold with a smooth latent space, leading to good generalization performance and robustness to noise in data." (Section 5, Experiments: "In this section, we conduct extensive experiments to show the effect of isometric regularization on three data modalities: neural SDFs, neural BRDFs, and neural operators.")
Researcher Affiliation: Academia — 1 Seoul National University, 2 Yonsei University, 3 Chung-Ang University, 4 Korea Institute for Advanced Study. (Author email addresses redacted.)
Pseudocode: Yes — Algorithm 1: Efficient approximation of Eq. (9)
  Precondition: input concatenation (F : R^(n+m) → R^l)
  Input: latent codes {z_0, ..., z_N} and input coordinate samples {{x_0^(0), ..., x_0^(K)}, ..., {x_N^(0), ..., x_N^(K)}}
  Output: relaxed distortion measure G
   1: G1, G2 ← 0
   2: Augment z with the modified mix-up data augmentation
   3: for all z_i in z do
   4:   x_i ← {x_i^(0), ..., x_i^(K)}
   5:   Sample vector v_i ~ N(0, I_{m×m})
   6:   Expand v_i by repeating it K times
   7:   Augment v_i by concatenating [0_{K×n}, v_i]
   8:   Compute G = J(x_i, z_i) v_i with a Jacobian-vector product
   9:   G1 ← G1 + E_z[E_x[G^T G]]
  10:   Compute D = G^T ∂F(x_i, z_i)/∂(x, z) with a vector-Jacobian product
  11:   Slice D by taking the last m components
  12:   G2 ← G2 + E_z[E_x[D]^T E_x[D]]
  13: end for
  14: G ← G2/G1
  15: return G
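The loop of Algorithm 1 can be sketched in PyTorch with `torch.func.jvp` and `torch.func.vjp`. This is a minimal illustration, not the authors' code: the function and tensor names are assumptions, the mix-up augmentation of z (step 2) is omitted, and `F` is any network taking the concatenated `[x, z]` vector.

```python
import torch
from torch.func import jvp, vjp


def relaxed_distortion(F, zs, xs):
    """Approximate the relaxed distortion measure G = G2/G1.

    zs: (N, m) latent codes; xs: (N, K, n) coordinate samples per code.
    F maps a (K, n+m) batch of concatenated [x, z] inputs to (K, l) outputs.
    """
    N, K, n = xs.shape
    m = zs.shape[1]
    G1 = torch.tensor(0.0)
    G2 = torch.tensor(0.0)
    for z, x in zip(zs, xs):
        zK = z.repeat(K, 1)                 # repeat the code for each coordinate
        v = torch.randn(m)                  # v_i ~ N(0, I_m)
        vK = v.repeat(K, 1)                 # expand v_i by repeating K times
        tangent = (torch.zeros(K, n), vK)   # augment: [0, v_i]

        def f(x_, z_):
            return F(torch.cat([x_, z_], dim=-1))

        # G = J(x_i, z_i) v_i via a Jacobian-vector product
        _, G = jvp(f, (x, zK), tangent)
        G1 = G1 + (G * G).sum(dim=-1).mean()        # E_x[G^T G]

        # D = G^T dF/d(x, z) via a vector-Jacobian product
        _, vjp_fn = vjp(f, x, zK)
        _, Dz = vjp_fn(G)                           # last m components (grad wrt z)
        D = Dz.mean(dim=0)                          # E_x[D]
        G2 = G2 + (D * D).sum()                     # E_x[D]^T E_x[D]
    return G2 / G1
```

Dividing the accumulated sums by N would not change the ratio, so the loop averages over latent codes implicitly.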
Open Source Code Yes Project website: heo0224.github.io/IRMF_projectpage
Open Datasets: Yes — "MNIST (Deng, 2012) and ShapeNet (Chang et al., 2015). ... We use the MERL dataset (Matusik et al., 2003), a common BRDF dataset measured from 100 real isotropic materials. ... This study focuses on two types of PDE datasets: the reaction-diffusion equation, as discussed in Yang et al. (2022), and the Darcy flow problem, based on Lu et al. (2022)."
Dataset Splits: Yes — "MNIST Dataset. We make two datasets with different numbers of training data N = 300, 1500. The datasets contain 100 and 500 images randomly chosen from each of three digits [6, 8, 9]. For test-time optimization, we randomly sample 256 points from zero-level surfaces of the test dataset with 100 images from each digit [6, 8, 9]. ShapeNet Dataset. We randomly choose 5% (N = 271) and 10% (N = 542) of shapes from the chair category of ShapeNet V2 for training datasets. ... The test-time optimization reconstructs the full 3D shape from partial point clouds obtained by deleting the right half of the surface point cloud. ... MERL dataset... We split the dataset into 80 materials for training and 20 materials for the test dataset. We train the model with various numbers of training data: N = 20, 40, 60, 80. ... For the projection of input and output functions, the time domain t ∈ [0, 1] and spatial domain x ∈ [0, 1] are uniformly discretized into 100 points each. ... The input a(x, y) and the output u(x, y) are uniformly discretized to a resolution of 20 × 20."
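The reported MERL protocol (80 train / 20 test materials, then training on subsets of N = 20, 40, 60, 80) can be illustrated as follows. The material names and the seed are placeholders, not the paper's actual file list or split.

```python
import random

# 100 isotropic materials, as in the MERL dataset (names are placeholders)
materials = [f"material_{i:03d}" for i in range(100)]

rng = random.Random(0)          # fixed seed for a reproducible split (assumed)
shuffled = materials[:]
rng.shuffle(shuffled)
train, test = shuffled[:80], shuffled[80:]   # 80 train / 20 test materials

# Varying amounts of training data, N = 20, 40, 60, 80
subsets = {N: train[:N] for N in (20, 40, 60, 80)}
```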
Hardware Specification: Yes — "We use a single NVIDIA GeForce RTX 2080 Ti GPU for training."
Software Dependencies: Yes — "We would like to thank Clément Jambon for his valuable help with the Mitsuba 3 renderer."
Experiment Setup: Yes — "We use Adam (Kingma, 2014) with a learning rate of 1e-4 for network parameters and 1e-3 for the latent codes. ... We use Adam with a learning rate of 1e-4 for network parameters and 1e-3 for the latent codes. ... We optimize the network parameters with Adam with a learning rate of 5e-3 for 2000 epochs. We adjust the learning rate by half per 500 epochs. ... We set λ_eikonal = 0.01. ... The learning rate is 5e-4 for the network parameters and 1e-4 for the latent codes. We trained the models for 200 epochs. We decreased the learning rates by half after 100 epochs. ... The learning rate is set to 1e-4. We trained the models for 200,000 epochs with a batch size of 1,000 for the reaction-diffusion datasets and for 55,000 epochs with a batch size of 200 for the Darcy flow datasets."
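The recurring pattern above — one Adam learning rate for the network parameters and another for the auto-decoded latent codes, halved on a fixed epoch schedule — maps directly onto PyTorch parameter groups. A minimal sketch, assuming an arbitrary stand-in network and latent table (not the authors' actual model sizes):

```python
import torch

# Stand-ins: any INR decoder and a learnable latent code per training shape
net = torch.nn.Linear(10, 1)
latents = torch.nn.Parameter(torch.zeros(300, 64))

# Per-group learning rates: 1e-4 for network parameters, 1e-3 for latent codes
optimizer = torch.optim.Adam([
    {"params": net.parameters(), "lr": 1e-4},
    {"params": [latents], "lr": 1e-3},
])

# Halve the learning rates every 500 epochs, as in the reported SDF schedule
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.5)
```

Each call to `scheduler.step()` at the end of an epoch scales both groups' rates by `gamma` once per 500 epochs, matching "adjust the learning rate by half per 500 epochs".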