JoIN: Joint GANs Inversion for Intrinsic Image Decomposition

Authors: Viraj Shah, Svetlana Lazebnik, Julien Philip

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the success of our approach through exhaustive qualitative and quantitative evaluations and ablation studies on various datasets."
Researcher Affiliation | Collaboration | Viraj Shah (UIUC), Svetlana Lazebnik (UIUC), Julien Philip (Adobe Research)
Pseudocode | No | The paper describes its methods using mathematical equations and descriptive text, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | "We plan to release our code in future."
Open Datasets | Yes | "The Materials dataset is composed of highly realistic renderings of synthetic material tiles as done in Deschaintre et al. (2018). Faces are notoriously adapted to the usage of GANs and to cater to such a use case, we use the synthetic Lumos dataset (Yeh et al., 2022) to train our GANs ... for evaluating our model ... from FFHQ dataset (Karras et al., 2018; 2019a). Here we show experiments on Hypersim dataset (Roberts et al., 2021) of indoor scenes."
Dataset Splits | Yes | "The Primeshapes dataset ... contains 100,000 rendered images ... out of which 70,000 images are used to train the StyleGAN generators and pSp encoders for individual components ... The remaining 30,000 images are used as unseen testing images. The Materials dataset ... Out of the full dataset, 1,000 images are left out to serve as a test set."
Hardware Specification | Yes | "On a single NVIDIA A40, JoIN takes 190 seconds for inference on a single image."
Software Dependencies | Yes | "We use the blender python package (Blender Online Community, 2023) to generate both the Primeshapes and Materials synthetic datasets."
Experiment Setup | Yes | "Our optimization-based inversion method remains the same for all the datasets with a learning rate of 0.1, kNN loss weight of 0.0001, and k = 50. We run the optimization for 1000 steps."
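To make the reported hyperparameters concrete, here is a minimal sketch of a generic optimization-based inversion loop using the stated learning rate (0.1) and step count (1000). This is an illustration only: the linear "generator" `G` is a stand-in for the paper's StyleGAN generators, and the kNN regularizer (weight 0.0001, k = 50) is omitted for brevity.

```python
import numpy as np

# Toy optimization-based inversion: a fixed linear "generator" G maps
# a latent w to an "image", and we recover w for a target image by
# gradient descent on the mean squared reconstruction loss.
# G, w_true, and the dimensions are hypothetical stand-ins; only the
# learning rate (0.1) and step count (1000) come from the paper.
rng = np.random.default_rng(0)
G = rng.standard_normal((64, 8))   # stand-in generator weights
w_true = rng.standard_normal(8)    # latent that produced the target
target = G @ w_true                # "image" to be inverted

w = np.zeros(8)                    # initial latent guess
lr = 0.1                           # learning rate from the paper
for step in range(1000):           # 1000 optimization steps
    residual = G @ w - target
    grad = G.T @ residual / len(target)  # gradient of 0.5 * mean L2 loss
    w -= lr * grad

print(np.allclose(w, w_true, atol=1e-4))  # True: latent recovered
```

In the paper's actual setting, the generator is a pretrained StyleGAN per intrinsic component and the objective additionally includes the kNN loss term weighted by 0.0001; this sketch only shows the shape of the optimization loop.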