Multiobjective Distribution Matching
Authors: Xiaoyuan Zhang, Peijie Li, Yingying Yu, Yichi Zhang, Han Zhao, Qingfu Zhang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on real-world images demonstrate that both algorithms can generate high-quality interpolated images across multiple distributions. |
| Researcher Affiliation | Academia | 1Department of Computer Science, City UHK. 2Department of Mathematics, HKU. 3Department of Statistics, IU. 4Department of Computer Science, UIUC. Correspondence to: Qingfu Zhang <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 MODM on the Preference Simplex. Algorithm 2 Multiobjective VAE (MOVAE) |
| Open Source Code | No | The paper does not provide access to source code: it describes algorithms and experimental results but includes no repository link and no explicit statement about code release for the described methodology. |
| Open Datasets | Yes | We evaluate MOVAE on the Quick Draw dataset |
| Dataset Splits | No | The paper mentions "Number of training images is around 12K" but does not specify validation or test splits, or any splitting methodology. |
| Hardware Specification | No | The paper does not provide hardware details. It mentions image size, network parameters, and the optimizer, but no GPU/CPU models or other hardware specifications used for the experiments. |
| Software Dependencies | No | The paper mentions "The optimizer is Adam with a learning rate of 3e-5." but does not list versions for any libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | Both the encoder and decoder networks have around 157K parameters. Number of training images is around 12K. The optimizer is Adam with a learning rate of 3e-5. |
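The only reproducible optimizer detail the paper reports is "Adam with a learning rate of 3e-5". As a minimal sketch of what that setting means, here is one standard Adam update step in plain Python; only the learning rate comes from the paper, while the moment coefficients and epsilon are the usual defaults (beta1=0.9, beta2=0.999, eps=1e-8) and are assumptions:

```python
def adam_step(theta, grad, m, v, t, lr=3e-5, b1=0.9, b2=0.999, eps=1e-8):
    """Standard Adam update for a single scalar parameter.

    Only lr=3e-5 is stated in the paper; b1, b2, and eps are the
    common defaults from Kingma & Ba and are assumed here.
    """
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad * grad   # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# One step on a toy scalar parameter with gradient 2.0:
theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=2.0, m=m, v=v, t=1)
```

On the first step the bias-corrected moments reduce to the raw gradient and its square, so the update magnitude is approximately lr, illustrating why a small rate like 3e-5 yields very gradual parameter movement.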