Perm: A Parametric Representation for Multi-Style 3D Hair Modeling

Authors: Chengan He, Xin Sun, Zhixin Shu, Fujun Luan, Soeren Pirk, Jorge Alejandro Amador Herrera, Dominik L Michels, Tuanfeng Wang, Meng Zhang, Holly Rushmeier, Yi Zhou

ICLR 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We conduct extensive experiments to validate the architecture design of PERM, and finally deploy the trained model as a generic prior to solve task-agnostic problems, further showcasing its flexibility and superiority in tasks such as single-view hair reconstruction, hairstyle editing, and hair-conditioned image generation." |
| Researcher Affiliation | Collaboration | Yale University, Adobe Research, Kiel University, KAUST, NJUST |
| Pseudocode | No | The paper describes mathematical formulations and architectural overviews but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions a project page ("More details can be found on our project page: https://cs.yale.edu/homes/che/projects/perm/") and the release of a dataset ("we curated a dataset of 3D hair in a unified and parametric manner, which we released as well to facilitate future research."), but does not explicitly state that the source code for the described methodology is released, nor does it link to a code repository. |
| Open Datasets | Yes | "We train PERM on an augmented version of USC-HairSalon (Hu et al., 2015), which contains a total of 21,054 data samples. For evaluation, we compiled a separate dataset of 3D hair models from various publicly available resources, including CT2Hair (Shen et al., 2023) (10 hairstyles), Structure Aware Hair (Luo et al., 2013) (3 hairstyles), and Cem Yuksel's website (4 hairstyles). ... Similar to AMASS (Mahmood et al., 2019), we curated a dataset of 3D hair in a unified and parametric manner, which we released as well to facilitate future research." |
| Dataset Splits | No | The paper trains on an augmented version of USC-HairSalon (21,054 samples) and evaluates on a separate dataset of 17 publicly available hair models, but it does not specify a training/validation split for USC-HairSalon, nor counts or percentages for such a split if one exists. |
| Hardware Specification | Yes | "We train our model and conduct all experiments on a desktop machine with an Intel Core i9-10850K CPU @ 3.60GHz, 64GB memory, and an NVIDIA RTX 3090 GPU." |
| Software Dependencies | Yes | "Our code is implemented with Python 3.9.18, PyTorch 1.11.0, and CUDA Toolkit 11.3." |
| Experiment Setup | Yes | "The StyleGAN2 backbone has a learning rate of 0.002 for its generator and 0.001 for its discriminator... For both the U-Net and VAE, we set their learning rates to 0.002... We employ the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.1 and a cosine annealing schedule for the learning rate. For better convergence, we first optimize θ only for 1,000 iterations as a warm-up to match the global shape, and then jointly optimize θ and β for 4,000 iterations." |
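The two-stage fitting schedule quoted in the Experiment Setup row (a warm-up that optimizes θ alone, followed by joint optimization of θ and β under a cosine-annealed learning rate starting at 0.1) can be sketched as below. This is a toy illustration only: a quadratic objective with analytic gradients and plain gradient descent stands in for the paper's Adam optimizer and reconstruction losses, and all variable names here are hypothetical.

```python
import math

def cosine_lr(step, total_steps, lr_init=0.1, lr_min=0.0):
    """Cosine annealing from lr_init down to lr_min, matching the
    quoted setup's initial learning rate of 0.1."""
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + math.cos(math.pi * step / total_steps))

def fit(theta, beta, warmup_iters=1000, joint_iters=4000, lr_init=0.1):
    """Two-stage fitting: optimize theta only for warmup_iters, then
    theta and beta jointly for joint_iters.

    Toy objective f = (theta - 2)^2 + (beta - 1)^2 with hand-derived
    gradients; the paper instead fits hair parameters against
    reconstruction losses with Adam.
    """
    total = warmup_iters + joint_iters
    for step in range(total):
        lr = cosine_lr(step, total, lr_init=lr_init)
        g_theta = 2.0 * (theta - 2.0)      # d f / d theta
        theta -= lr * g_theta
        if step >= warmup_iters:           # beta stays frozen during warm-up
            g_beta = 2.0 * (beta - 1.0)    # d f / d beta
            beta -= lr * g_beta
    return theta, beta
```

With `theta, beta = fit(0.0, 0.0)`, both variables converge to the toy optimum (2, 1); the warm-up moves θ close to its target before β starts updating, mirroring the "match the global shape first" rationale in the quote.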