Time Series Supplier Allocation via Deep Black-Litterman Model

Authors: Xinke Jiang, Wentao Zhang, Yuchen Fang, Xiaowei Gao, Hao Chen, Haoyu Zhang, Dingyi Zhuang, Jiayuan Luo

AAAI 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on four datasets demonstrate significant improvements of DBLM on TSSA. We evaluate DBLM on four supply chain datasets to optimize Time Series Supplier Allocation (TSSA), namely MCM and SZ, to ensure comprehensive validation." |
| Researcher Affiliation | Academia | 1 University of Electronic Science and Technology of China, Chengdu, China; 2 ShanghaiTech University, Shanghai, China; 3 University College London, London, United Kingdom; 4 University of Chinese Academy of Sciences, Beijing, China; 5 Zhongnan University of Economics and Law, Wuhan, China; 6 Massachusetts Institute of Technology, Cambridge, USA; 7 University of Macau, Macau, China |
| Pseudocode | Yes | Training complexity and the algorithm are given in Appendices A.6 and A.7. Algorithm 1, "DBLM Framework", appears in Appendix A.7. |
| Open Source Code | Yes | "Source codes and appendix are openly accessible at https://github.com/QiuFengqing/DBLM." |
| Open Datasets | Yes | "We use the SLD dataset (Zhuang et al. 2022) (67 zones, 3-month duration, with 5-minute time intervals) and the BSS dataset (Gao, Chen, and Haworth 2023) (797 stations, 10,207,268 trips, and over 3 years of data with 15-minute time intervals) for the generalization experiment." |
| Dataset Splits | Yes | "The dataset is divided into 70% training, 10% validation, and 20% testing portions." |
| Hardware Specification | Yes | "Implementations are done using the PyTorch 1.9.0 in Python 3.8 on NVIDIA Tesla V100 GPU." |
| Software Dependencies | Yes | "Implementations are done using the PyTorch 1.9.0 in Python 3.8 on NVIDIA Tesla V100 GPU." |
| Experiment Setup | Yes | "To ensure reproducibility, we optimize the parameters of baseline models using the Adam Optimizer with L2 regularization and a dropout rate of 0.2. The sequence length for both input and forecasting is set to p = f = 4. For the DBLM model and baselines incorporating GNNs and TCNs, we utilize three layers with 150 hidden units each. For the DBLM model specifically, we set τ = 3, δ = 0.6, η = 1e-4, and κ = 2, with the number of attention heads at 3 and a soft rank regularization strength of 0.5. An early-stopping strategy with a patience of 10 epochs is employed to mitigate overfitting." |
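The evaluation protocol quoted above (a chronological 70%/10%/20% train/validation/test split and early stopping with a patience of 10 epochs) can be sketched as follows. This is a minimal illustration, not the authors' DBLM implementation; the function and class names here are hypothetical.

```python
def chronological_split(series, train=0.7, val=0.1):
    """Split a time-ordered sequence into contiguous train/val/test portions.

    The paper reports 70% training, 10% validation, and 20% testing; the
    test portion is whatever remains after the first two slices.
    """
    n = len(series)
    n_train = int(n * train)
    n_val = int(n * val)
    return (series[:n_train],
            series[n_train:n_train + n_val],
            series[n_train + n_val:])


class EarlyStopper:
    """Signal a stop after `patience` epochs without validation improvement.

    Mirrors the patience-10 early-stopping strategy described in the setup;
    the interface is illustrative, not taken from the DBLM codebase.
    """

    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        # Reset the counter on improvement, otherwise accumulate bad epochs.
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True means "stop training"
```

Because the data are time series, the split is chronological (contiguous slices) rather than a random shuffle, so the test period always follows the training period.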