An Inverse Optimization Approach to Contextual Inverse Optimization

Authors: Yasunari Hikima, Naoyuki Kamiyama

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental To investigate the effectiveness of the proposed method, we conducted numerical experiments on several optimization problems and compared the proposed method against the existing methods for a predict-then-optimize framework. In Figure 2, we show the history of the average relative regret in Eq. (13) for each method on the {0, 1}-knapsack problem with polynomial degree deg ∈ {2, 4} (Figures 2a, 2b) and the diverse bipartite matching problem with deg ∈ {2, 4} (Figures 2c, 2d), respectively. We also summarize the computational time per epoch in seconds for each method in Table 1. Table 2 presents the mean and standard deviation of the relative regret, and the computational time per epoch for the proposed method with M set to 1, 2, 5, and 10.
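The report cites "relative regret in Eq. (13)" without reproducing the formula. Eq. (13) itself is not available here; the sketch below assumes the definition commonly used in the predict-then-optimize literature (regret of the decision induced by predicted costs, normalized by the optimal objective value), which may differ in detail from the paper's exact expression:

```python
import numpy as np

def relative_regret(c_true, x_pred, x_opt):
    """Assumed relative-regret metric (hypothetical stand-in for Eq. (13)).

    c_true : true cost/value vector
    x_pred : decision obtained by optimizing over the *predicted* costs
    x_opt  : optimal decision under the *true* costs (a maximization problem
             is assumed here)
    """
    opt_val = c_true @ x_opt
    return (opt_val - c_true @ x_pred) / abs(opt_val)

# Toy illustration: the predicted-cost decision forgoes item 2.
c = np.array([3.0, 2.0])
x_opt = np.array([1, 1])    # true-optimal selection, value 5
x_pred = np.array([1, 0])   # decision from predicted costs, value 3
print(relative_regret(c, x_pred, x_opt))  # 0.4
```

Figure 2 in the paper then plots the average of this quantity over the test set as training proceeds.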
Researcher Affiliation Collaboration Yasunari Hikima (1, 2), Naoyuki Kamiyama (1); 1: Kyushu University, 2: Fujitsu Limited. EMAIL, EMAIL
Pseudocode Yes We present the pseudocode of the proposed algorithm to solve the contextual inverse optimization in Algorithm 1. We present the pseudocode of the cutting plane algorithm in Algorithm 2.
Open Source Code No The text is ambiguous or lacks a clear, affirmative statement of release. The paper mentions using the 'pyepo package' for implementing existing methods, but does not provide any specific link or statement regarding the release of their own source code for the proposed methodology.
Open Datasets No The paper describes how synthetic data was generated for the experiments (Section 4.2 'Synthetic Data Generation'), but it does not provide any specific link, DOI, repository, or citation for accessing the generated datasets themselves. While it references 'CORA citation network dataset [Sen et al., 2008]' for the original bipartite matching problem, the experiments use synthetically generated data, not the CORA dataset directly.
Dataset Splits Yes For all experiments, we set the number of training samples as N = 1000 and the number of test samples as N = 1000.
Hardware Specification Yes Experimental programs were implemented in Python 3.11 and run on a Windows 10 PC with an Intel(R) Xeon(R) W1270 CPU 3.40 GHz and 64 GB RAM.
Software Dependencies Yes Experimental programs were implemented in Python 3.11. For both the existing and proposed methods, we used the Gurobi Optimizer 11.0.2 [Gurobi Optimization, LLC, 2024] to solve the optimization problems.
Experiment Setup Yes For existing methods, we employed the Adam optimizer [Kingma and Ba, 2014] with a learning rate η = 10^-2 to minimize each loss function. For the learning-to-rank methods, we employed pairwise loss and listwise loss, and we set the ratio of calling the optimization oracle as 0.5. For the perturbed optimizer approach, we set both the number of perturbations and the amplitude of the perturbation as 1. For the proposed method (Algorithm 1), we set T = 100, M = 1, σ = 1, and we solve the inverse optimization problem by the optimization problem given in Eq. (10) for a {0, 1}-knapsack problem and the cutting-plane algorithm (Algorithm 2) for a diverse bipartite matching problem. For both methods, we use the linear model, f(z; Θ) = zΘ, to predict a coefficient vector.
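The setup above combines a linear predictor f(z; Θ) = zΘ with an exact solver for the downstream {0, 1}-knapsack problem (the paper uses Gurobi 11.0.2). As the authors' code is not released, the following is a minimal sketch of that predict-then-optimize pipeline only; the function names, the brute-force solver (a stand-in for Gurobi, viable only for tiny instances), and the random parameters are illustrative assumptions, not the paper's implementation:

```python
import itertools
import numpy as np

def f(z, Theta):
    """Linear predictive model f(z; Θ) = zΘ mapping a context z to item values."""
    return z @ Theta

def knapsack_bruteforce(values, weights, capacity):
    """Exact {0,1}-knapsack by enumeration (hypothetical stand-in for the
    Gurobi solver used in the paper; only feasible for very small n)."""
    n = len(values)
    best_val, best_x = -np.inf, None
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        if x @ weights <= capacity and x @ values > best_val:
            best_val, best_x = x @ values, x
    return best_x

# Illustrative forward pass: context -> predicted values -> decision.
rng = np.random.default_rng(0)
Theta = rng.normal(size=(3, 4))        # parameters to be learned (random here)
z = rng.normal(size=3)                 # one context vector
c_hat = f(z, Theta)                    # predicted item values
weights = np.array([2.0, 3.0, 1.0, 4.0])
x = knapsack_bruteforce(c_hat, weights, capacity=5.0)
```

In the proposed method (Algorithm 1), the inner loop additionally solves an inverse optimization problem (Eq. (10), or Algorithm 2 for the matching problem) to update Θ, which this sketch omits.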