Incorporating Sum Constraints into Multitask Gaussian Processes

Authors: Philipp Pilar, Carl Jidling, Thomas B. Schön, Niklas Wahlström

TMLR 2022

Reproducibility assessment: for each variable, the assessed result is followed by the supporting LLM response (quoted from the paper).
Research Type: Experimental. LLM response: "In this section, we demonstrate our method at the hand of two simulation experiments and one real data experiment. They have in common that the constraints involved are constant (see Section 3.2.1); for examples of the non-constant case, see Sections A.2 and A.3 in the Supplementary material. [...] In Table 1, values for both the root mean squared error (RMSE) and the average absolute violation of the constraint |C| are given for various noise levels σ_n, both with complete and incomplete measurements."
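The two reported metrics can be made concrete. A minimal numpy sketch, assuming the constant constraint has the linear form sum_i c_i f_i(x) = b as in Section 3.2.1 (the helper names `rmse` and `mean_abs_violation` are mine, not from the paper):

```python
import numpy as np

def rmse(f_pred, f_true):
    # Root mean squared error over all test inputs and tasks.
    return np.sqrt(np.mean((f_pred - f_true) ** 2))

def mean_abs_violation(f_pred, c, b):
    # Average absolute constraint violation |C| for a constant sum
    # constraint sum_i c_i f_i(x) = b; one row of f_pred per input x.
    return np.mean(np.abs(f_pred @ c - b))

# Toy check: predictions satisfying f1 + f2 = 1 exactly violate it by 0.
f = np.array([[0.25, 0.75], [0.5, 0.5]])
c = np.array([1.0, 1.0])
print(mean_abs_violation(f, c, 1.0))  # 0.0
```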
Researcher Affiliation: Academia. LLM response: "Philipp Pilar (EMAIL), Department of Information Technology, Uppsala University; Carl Jidling (EMAIL), Department of Information Technology, Uppsala University; Thomas B. Schön (EMAIL), Department of Information Technology, Uppsala University; Niklas Wahlström (EMAIL), Department of Information Technology, Uppsala University"
Pseudocode: Yes. LLM response: "Algorithm 1: The Constrained GP: High-level Procedure [...] Algorithm 2: Constraining the GP (Section B.1 refers to the Supplementary material) [...] Algorithm 3: Constraining the GP, Special Case of Constant Task Interdependencies"
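Algorithms 1-3 are given in full in the paper. The core idea behind constraining a multitask GP to a constant sum constraint c^T f(x) = b can be illustrated generically: model an unconstrained latent process g and map it affinely through a basis of the nullspace of c^T, so that every sample satisfies the constraint by construction. A hedged numpy sketch, not the paper's code (the function `constraint_map` and its interface are mine):

```python
import numpy as np

def constraint_map(c, b):
    """Return (F, v) such that f = F @ g + v satisfies c @ f = b for
    every latent vector g: the columns of F span the nullspace of c^T,
    and v is a particular solution of the constraint."""
    c = np.asarray(c, dtype=float)
    _, _, Vt = np.linalg.svd(c[None, :])  # SVD of the 1 x D row c^T
    F = Vt[1:].T                          # D x (D-1) nullspace basis
    v = b * c / (c @ c)                   # particular solution: c @ v = b
    return F, v

# Any latent g yields outputs obeying f1 + 2*f2 + f3 = 4.
c, b = np.array([1.0, 2.0, 1.0]), 4.0
F, v = constraint_map(c, b)
g = np.random.default_rng(0).standard_normal(2)
f = F @ g + v
print(np.isclose(c @ f, b))  # True
```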
Open Source Code: Yes. LLM response: "The code used for the experiments is available at https://github.com/ppilar/SumConstraint."
Open Datasets: Yes. LLM response: "In this section we consider the Double Pendulum Chaotic dataset (Asseman et al., 2018); this dataset consists of 21 different two-dimensional trajectories of a double pendulum and contains annotated positions of the masses attached at the ends of the two pendula."
Dataset Splits: Yes. LLM response: "We pick a sequence of 200 data points (which are fairly close together) from one of the trajectories; 15 of these points are used during hyperparameter optimization, and to receive an estimate Ê of the energy. The remaining 185 points are used as test data to compare the performance of constrained and unconstrained GP, both in terms of constraint fulfillment and in terms of RMSE with respect to the data."
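The split itself is simple to reproduce mechanically. A sketch assuming the 15 hyperparameter-optimization points are drawn at random from the 200-point sequence (the paper does not state how they are selected, so the sampling scheme here is an assumption):

```python
import numpy as np

n_total, n_train = 200, 15
rng = np.random.default_rng(0)

# 15 points for hyperparameter optimization and the energy estimate E_hat;
# the remaining 185 points serve as test data.
perm = rng.permutation(n_total)
train_idx, test_idx = perm[:n_train], perm[n_train:]
print(len(train_idx), len(test_idx))  # 15 185
```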
Hardware Specification: Yes. LLM response: "All experiments have been conducted on a system with an NVIDIA GTX 1060 6 GB GPU, an Intel Core i7-7700K @ 4.2 GHz CPU, and 16 GB RAM."
Software Dependencies: No. LLM response: "All the models have been implemented in python with the library gpytorch (Gardner et al., 2018)." The paper mentions the programming language Python and the library GPyTorch but does not provide specific version numbers for either.
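Since the GPyTorch version is unspecified, a library-free sketch of the exact GP regression that such libraries perform under the hood may help with re-implementation. This is the textbook posterior (Rasmussen & Williams, Eqs. 2.23-2.24), not code from the paper, and the kernel hyperparameters are placeholders:

```python
import numpy as np

def rbf(x1, x2, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel on 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    # Exact GP posterior mean and marginal variance at the test inputs.
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = rbf(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

x = np.linspace(0.0, 1.0, 10)
y = np.sin(2.0 * np.pi * x)
mean, var = gp_posterior(x, y, x)
print(mean.shape, var.shape)  # (10,) (10,)
```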
Experiment Setup: Yes. LLM response: "The models have been trained using the Adam optimizer provided by gpytorch. For each experiment, the corresponding learning rate (lr), number of iterations (iter) and (if applicable) scheduler settings are given in Table 7. The scheduler multiplies the learning rate with s-factor after s-steps iterations."

Table 7: Training parameters for the experiments.

| Experiment | lr | iter | s-steps | s-factor |
| HO (Sec. 4.1) | 0.1 | 200 | 100 | 0.5 |
| Triangle (Sec. 4.2) | 0.1 | 2000 | 800 | 0.2 |
| DP (Sec. 4.3) | 0.1 | 2000 | 800/500 | 0.2/0.5 |
| Free fall (A.1) | 0.1 | 200 | 100 | 0.5 |
| Damped HO (A.2) | 0.1 | 200 | 100 | 0.5 |
| Non-square (A.3) | 0.1 | 200 | 100 | 0.5 |
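The scheduler description ("multiplies the learning rate with s-factor after s-steps iterations") corresponds to a MultiStepLR-style decay. A small sketch using the harmonic-oscillator settings from Table 7; the helper name `lr_schedule`, and the reading of the paired 800/500 and 0.2/0.5 entries as successive milestones, are my interpretation:

```python
def lr_schedule(it, lr0=0.1, s_steps=(100,), s_factors=(0.5,)):
    # Multiply the base rate by each s-factor once the corresponding
    # s-steps milestone has been reached.
    lr = lr0
    for step, factor in zip(s_steps, s_factors):
        if it >= step:
            lr *= factor
    return lr

# HO settings (Table 7): lr 0.1 for 200 iterations, halved after step 100.
print(lr_schedule(99), lr_schedule(150))  # 0.1 0.05
```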