TexTailor: Customized Text-aligned Texturing via Effective Resampling

Authors: Suin Lee, Dae-Shik Kim

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on a subset of the Objaverse dataset and the ShapeNet car dataset demonstrate that TexTailor outperforms state-of-the-art methods in synthesizing view-consistent textures. The source code for TexTailor is available at https://github.com/Adios42/Textailor
Researcher Affiliation | Academia | Suin Lee, Dae-Shik Kim, KAIST, Daejeon, South Korea (EMAIL)
Pseudocode | No | The paper describes the methodology using mathematical formulations and textual descriptions of the steps (e.g., in Sections 2.1, 2.2, 3.1, 3.2, and 3.3) but does not include a dedicated pseudocode or algorithm block.
Open Source Code | Yes | The source code for TexTailor is available at https://github.com/Adios42/Textailor
Open Datasets | Yes | Experiments on a subset of the Objaverse dataset (Deitke et al., 2022) and the ShapeNet car dataset (Chang et al., 2015) demonstrate the superior performance of TexTailor...
Dataset Splits | No | We select a subset of the Objaverse dataset (Deitke et al., 2022) to evaluate our model, following the approach of Chen et al. (2023a). In this dataset, Chen et al. (2023a) filter out low-quality or misaligned meshes from the designated categories, resulting in 410 textured meshes across 225 categories for our experiments. Notably, the original textures are used exclusively for evaluation. The paper describes selecting a subset for evaluation and fine-tuning with rendered images from specific viewpoints, but it does not provide explicit training/validation/test splits for the datasets.
Hardware Specification | Yes | Additionally, TexTailor's fine-tuning process is time-intensive, requiring approximately one and a half hours per 3D mesh on an NVIDIA TITAN RTX.
Software Dependencies | No | For rendering and texture projection, we utilize the PyTorch framework (Paszke et al., 2017) along with PyTorch3D (Ravi et al., 2020). This mentions software but does not provide specific version numbers.
Experiment Setup | Yes | For both datasets, the number of resampling steps and the parameters λ, β, and γ are set to 3, 2.5, 0.5, and 0.5, respectively. We fine-tune the ControlNet using five images rendered from viewpoints close to and including the first viewpoint: v1 = (0°, 15°, 1), v2 = (0°, 35°, 1), v3 = (0°, 5°, 1), v4 = (20°, 15°, 1), and v5 = (340°, 15°, 1).
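For reference, the reported hyperparameters and fine-tuning viewpoints can be collected into a single configuration sketch. This is an illustrative snippet only: the key names (`resampling_steps`, `lam`, `beta`, `gamma`, `finetune_viewpoints`) are assumptions and do not come from the released TexTailor code; only the numeric values are taken from the paper.

```python
# Hypothetical configuration mirroring the setup reported in the paper.
# Key names are illustrative, not taken from the official repository.
config = {
    "resampling_steps": 3,  # number of resampling steps
    "lam": 2.5,             # λ
    "beta": 0.5,            # β
    "gamma": 0.5,           # γ
    # Five viewpoints (azimuth°, elevation°, radius) used to fine-tune
    # ControlNet, close to and including the first viewpoint v1.
    "finetune_viewpoints": [
        (0, 15, 1),    # v1
        (0, 35, 1),    # v2
        (0, 5, 1),     # v3
        (20, 15, 1),   # v4
        (340, 15, 1),  # v5
    ],
}
```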