Generative Adversarial Ranking Nets
Authors: Yinghua Yao, Yuangang Pan, Jing Li, Ivor W. Tsang, Xin Yao
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Meanwhile, numerous experiments show that GARNet can retrieve the distribution of user-desired data based on full/partial preferences in terms of various interested properties (i.e., discrete/continuous property, single/multiple properties). Code is available at https://github.com/EvaFlower/GARNet. Keywords: Generative Adversarial Network, Controllable Generation, User Preferences, Adversarial Ranking, Relativistic f-Divergence |
| Researcher Affiliation | Academia | Yinghua Yao, Yuangang Pan, Jing Li, Ivor W. Tsang: Center for Frontier AI Research, A*STAR, Singapore, and Institute of High Performance Computing, A*STAR, Singapore. Xin Yao: Department of Computing and Decision Sciences, Lingnan University, Hong Kong |
| Pseudocode | Yes | Algorithm 1 Generative Adversarial Ranking Nets. 1: Input: Training data X = {x_n}_{n=1}^N, user preferences S = {s_m}_{m=1}^M, batch size B, score vector π, ranker R and generator G. 2: Output: Generator G for user-preferred data distribution, i.e., Pg(x) = Pu(x). 3: repeat 4: Sample a mini-batch of preferences {s_i}_{i=1}^B from S. 5: Get fake samples {x_{g_i}}_{i=1}^B from the generator G, i.e., x_{g_i} = G(z_i) where z_i is a random noise. 6: Following Eq. (6a), construct target preferences {s_i^(R)}_{i=1}^B for the ranker R. 7: Train the ranker R according to Eq. (8a). 8: Following Eq. (6b), construct target preferences {s_i^(G)}_{i=1}^B for the generator G. 9: Train the generator G according to Eq. (8b). 10: until convergence |
| Open Source Code | Yes | Code is available at https://github.com/EvaFlower/GARNet. |
| Open Datasets | Yes | Dataset: (1) MNIST dataset (Lecun et al., 1998) consists of 28×28 images with digits zero to nine. We use its training set (50K images) for the experiment. (2) Labeled Faces in the Wild (LFW) dataset (Huang et al., 2008) consists of 13,143 celebrity face images from the wild. (3) UT-Zap50K dataset (Yu and Grauman, 2014) contains 50,025 shoe images from Zappos.com. |
| Dataset Splits | Yes | On the MNIST dataset (Lecun et al., 1998), we randomly pick up 0.5% samples of the digit six to constitute the minority class. All samples of the rest classes are retained. |
| Hardware Specification | No | The paper does not provide specific hardware details for running its experiments. |
| Software Dependencies | No | The paper mentions architectures (WGANGP, DCGAN) and optimizers (Adam) but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions). |
| Experiment Setup | Yes | For the training we use the Adam optimizer (Kingma and Ba, 2015) with learning rate 2×10^-4 and β1 = 0.5, β2 = 0.999. According to Proposition 5 and Corollary 7, we set the ground-truth score vector for s^(R) as π_{s^(R)} = [10 + 5(l−1), 10 + 5(l−2), ..., 10, 0] for all datasets, which can make GARNet learn a data distribution that is best consistent with user preferences as q_1 → 1. We simply set the ground-truth score vector for s^(G) as π_{s^(G)} = [10 + 5(l−2), 10 + 5(l−3), ..., 10, 5, 0]. For MNIST, the batch size used is 50. The training iteration is set to 100K. For LFW and UT-Zap50K, the batch size is 64. The training iteration is 200K. The training images are resized to 32×32 unless specifically mentioned. |
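As a small illustration of the reported setup, the two ground-truth score vectors can be generated programmatically. This is a sketch under our own reading of the quoted formulas (the helper names are ours; `l` denotes the number of ranked positions ahead of the lowest-scored one, as in the paper's notation):

```python
def score_vector_ranker(l):
    """pi_{s^(R)} = [10 + 5(l-1), 10 + 5(l-2), ..., 10, 0]:
    l entries descending in steps of 5 down to 10, then a final 0."""
    return [10 + 5 * (l - 1 - i) for i in range(l)] + [0]

def score_vector_generator(l):
    """pi_{s^(G)} = [10 + 5(l-2), 10 + 5(l-3), ..., 10, 5, 0]:
    the same descent shifted one step lower, ending 5, 0."""
    return [10 + 5 * (l - 2 - i) for i in range(l - 1)] + [5, 0]

# Example with l = 3:
print(score_vector_ranker(3))     # [20, 15, 10, 0]
print(score_vector_generator(3))  # [15, 10, 5, 0]
```

Note the deliberate gap between the last two entries of the ranker's vector (10 vs. 0), compared with the generator's uniform step of 5; per Proposition 5 and Corollary 7 this choice drives the learned distribution toward the user-preferred one.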