Flow Factorization for Efficient Generative Flow Networks

Authors: Jiashun Liu, Chunhui Li, Cheng-Hao Liu, Dianbo Liu, Qingpeng Cai, Ling Pan

AAAI 2025

Reproducibility assessment: each variable is listed with its result and the supporting LLM response.
Research Type: Experimental
"We conduct extensive experiments on various standard benchmarks, and results show that BN significantly improves learning efficiency and effectiveness compared to state-of-the-art baselines. [...] In this section, we conduct extensive experiments to understand the effectiveness of our method and investigate the following key questions: i) How does BN's performance compare to state-of-the-art baselines? ii) To what extent does BN improve learning efficiency and promote diversity in solution generation? iii) How effectively does BN scale to tackle larger-scale and more complex tasks?"
Researcher Affiliation: Collaboration
(1) Hong Kong University of Science and Technology; (2) Mila - Québec AI Institute, McGill University; (3) National University of Singapore; (4) Kuaishou Technology
Pseudocode: No
The paper describes the method using mathematical formulations and a network structure diagram (Figure 4), but does not include any explicitly labeled "Pseudocode" or "Algorithm" block.
Open Source Code: No
The paper states, "The implementation of all baseline methods is based on the publicly available open-source code following default hyperparameters as used in (Bengio et al. 2021; Malkin et al. 2022; Madan et al. 2023)." This refers to the baselines' code, not to the authors' own implementation of the proposed BN method. There is no direct link or clear statement about the availability of the code for BN in the provided text.
Open Datasets: Yes
"We conduct extensive experiments on standard evaluation benchmarks in the GFlowNets literature, including HyperGrid (Bengio et al. 2021), RNA sequence generation (Kim et al. 2023), and molecule generation (Bengio et al. 2021)."
Dataset Splits: No
For the HyperGrid task, "the state space is relatively small, so the true reward distribution can be directly calculated since it allows for enumerating all possible states." For RNA and molecule generation, the paper references previous work (Kim et al. 2023; Bengio et al. 2021) and appendices (Appendix B.3, B.4) for experimental setup details, but specific dataset split information is not provided in the main text.
Hardware Specification: No
The paper does not provide specific details about the hardware used for running the experiments, such as CPU or GPU models, or memory specifications.
Software Dependencies: No
The paper mentions that baseline implementations use "publicly available open-source code following default hyperparameters," but it does not specify any particular software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) for the experimental setup.
Experiment Setup: No
"Please refer to Appendix B.2 for a detailed description of the reward function R(x) and hyperparameter settings due to space limitation. [...] Details for the experimental setup can be found in Appendix B.3. [...] Please refer to Appendix B.4 for further details of experimental setup." The main text defers hyperparameter and detailed experimental setup information to the appendices, which are not provided.