Integrating Inference and Experimental Design for Contextual Behavioral Model Learning

Authors: Gongtao Zhou, Haoran Yu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental
"We evaluate each method based on the quality of the collected behavioral dataset H_T, measured by its effectiveness in training an accurate contextual behavioral model. Next, we introduce the settings of X_t and D_t. In each period t, a batch of 50 investors arrives. We generate each context vector x ∈ R^16 by randomly sampling its elements from the uniform distribution U[-1, 1]. ... We show the means and standard deviations of the NLL loss in Table 1, after running different methods for 15 periods under settings A-E. Our three methods significantly outperform the baselines."
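The quoted data-generation setup (15 periods, batches of 50 investors, 16-dimensional contexts drawn elementwise from U[-1, 1]) can be sketched as follows; the seed and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)   # seed chosen for illustration only

PERIODS, BATCH, DIM = 15, 50, 16  # values stated in the quoted setup

# One period's context matrix X_t: 50 investors, each with a
# 16-dimensional context sampled elementwise from U[-1, 1].
X_t = rng.uniform(-1.0, 1.0, size=(BATCH, DIM))
print(X_t.shape)  # (50, 16)
```

Across the full run, 15 such batches would be drawn, one per period.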
Researcher Affiliation | Academia
"Gongtao Zhou, Haoran Yu, School of Computer Science & Technology, Beijing Institute of Technology"
Pseudocode | Yes
"Algorithm 1: Inference-then-Design (ID)
1: Set p(θ) randomly, and H_0 = ∅.
2: for period t = 1 to T do
3:   Minimize KL[q(θ|φ) || p(θ)] − E_{q(θ|φ)}[log p(H_{t−1}|θ)] over φ to get q(θ|φ).
4:   Observe investor context matrix X_t.
5:   Choose d_t ∈ D_t to maximize EIG(d_t), which is computed based on q(θ|φ), X_t, I, and J.
6:   Observe investor choice vector y_t under d_t.
7:   H_t ← H_{t−1} ∪ {(X_t, d_t, y_t)}.
8: end for
9: return dataset H_T."
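Algorithm 1's inference-then-design loop can be sketched on a toy problem. Everything below is an illustrative assumption rather than the authors' implementation: a logistic choice model, a small discrete design set, self-normalized importance sampling standing in for the paper's variational inference, and a per-investor Monte Carlo approximation of the expected information gain (EIG).

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, BATCH, T = 4, 10, 3               # small toy sizes, not the paper's
DESIGNS = np.array([-1.0, 0.0, 1.0])   # hypothetical discrete design set D_t
THETA_TRUE = rng.normal(size=DIM)      # ground-truth behavioral parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def choice_probs(theta, X, d):
    # Toy model: P(y=1 | theta, x, d) for each investor context in X.
    return sigmoid(X @ theta + d)

def log_lik(theta, history):
    # log p(H | theta), summed over recorded (X, d, y) batches.
    ll = 0.0
    for X, d, y in history:
        p = choice_probs(theta, X, d)
        ll += np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return ll

def posterior_samples(history, n=200):
    # Stand-in for the paper's variational step: self-normalized
    # importance sampling with the prior N(0, I) as proposal.
    thetas = rng.normal(size=(n, DIM))
    logw = np.array([log_lik(th, history) for th in thetas])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)   # resample to equal weights
    return thetas[idx]

def eig(d, X, thetas):
    # Monte Carlo estimate of EIG(d) = E_theta KL(p(y|theta,d) || p(y|d)),
    # summed per investor (an approximation to the joint EIG).
    P = np.array([choice_probs(th, X, d) for th in thetas])  # (n, BATCH)
    p_marg = P.mean(axis=0)                                  # p(y=1 | d)
    kl = P * np.log((P + 1e-12) / (p_marg + 1e-12)) \
       + (1 - P) * np.log((1 - P + 1e-12) / (1 - p_marg + 1e-12))
    return kl.sum(axis=1).mean()

history = []
for t in range(T):
    thetas = posterior_samples(history)            # inference step
    X = rng.uniform(-1, 1, size=(BATCH, DIM))      # observe contexts X_t
    d = DESIGNS[np.argmax([eig(d, X, thetas) for d in DESIGNS])]  # design step
    y = (rng.random(BATCH) < choice_probs(THETA_TRUE, X, d)).astype(int)
    history.append((X, d, y))                      # H_t = H_{t-1} ∪ {(X_t, d_t, y_t)}

print(len(history))  # collected dataset H_T with T batches
```

The structure mirrors the algorithm: fit an (approximate) posterior on the history so far, score each candidate design by estimated information gain under that posterior, deploy the best design, and append the observed choices to the dataset.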
Open Source Code | Yes
"Our codes and generated data are available at: https://github.com/zhougongtao/IIDLP."
Open Datasets | Yes
"Our codes and generated data are available at: https://github.com/zhougongtao/IIDLP."
Dataset Splits | Yes
"We train a separate neural network using 80% of the data from H_T for training and 20% for validation."
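The quoted 80/20 split can be sketched as a random index partition; the record count below is a hypothetical placeholder, not a figure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 750  # hypothetical number of (context, design, choice) records in H_T

idx = rng.permutation(n)
cut = int(0.8 * n)                    # 80% train / 20% validation, as quoted
train_idx, val_idx = idx[:cut], idx[cut:]
print(len(train_idx), len(val_idx))   # 600 150
```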
Hardware Specification | Yes
"Each method is run with 20 random seeds simultaneously on a computer equipped with an Intel Core i7-12700KF (3.6 GHz) and 32 GB of RAM."
Software Dependencies | No
The paper does not explicitly list software dependencies (e.g., library names with version numbers such as PyTorch 1.9 or TensorFlow 2.x).
Experiment Setup | No
The paper defines the loss functions and general procedures for inference and design, but does not provide concrete hyperparameter values such as learning rates, optimizers, batch sizes, or the number of training epochs for neural network training.