Deep Nonparametric Quantile Regression under Covariate Shift

Authors: Xingdong Feng, Xin He, Yuling Jiao, Lican Kang, Caixing Wang

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments are conducted to further validate the theoretical findings and demonstrate the effectiveness of our proposed method. Numerical experiments on synthetic examples are provided in Section 5.
Researcher Affiliation | Academia | Xingdong Feng (EMAIL), School of Statistics and Data Science & Institute of Data Science and Statistics, Shanghai University of Finance and Economics, Shanghai, China; Xin He (EMAIL), School of Statistics and Data Science, Shanghai University of Finance and Economics, Shanghai, China; Yuling Jiao (EMAIL), School of Artificial Intelligence, Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan, China; Lican Kang (EMAIL), Institute for Math and AI, Wuhan University, Wuhan, China; Caixing Wang (EMAIL), School of Statistics and Data Science, Shanghai University of Finance and Economics, Shanghai, China
Pseudocode | Yes | Algorithm 1: The two-step pre-training deep nonparametric quantile regression algorithm.
Open Source Code | No | The paper does not provide an explicit statement of code release, a link to a repository, or any mention of code in supplementary materials for the described methodology.
Open Datasets | No | Numerical experiments on synthetic examples are provided in Section 5. ... We generate the data from the following univariate model Y = X^6 + σε, where ε ~ N(0, 1) and σ = 0.05. ... In this section, we consider the following additive multivariate model Y = sin(2πX_1) + 0.5 exp(X_2) + 1.5|(X_3 - 0.4)(X_3 - 0.6)| + σX_2 ε, where ε ~ t(3) and σ = 0.1. The data used for experiments are synthetically generated and not drawn from a publicly available dataset with a specific link or citation.
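The two synthetic data-generating processes quoted above can be sketched as follows. This is a hypothetical reconstruction for illustration only: the covariate distributions (taken here as uniform on [0, 1]) are not stated in the excerpt, and the univariate regression function is read as X^6 from the extracted text.

```python
import numpy as np

rng = np.random.default_rng(0)

def univariate_model(n, sigma=0.05, rng=rng):
    """Univariate model Y = X^6 + sigma * eps with eps ~ N(0, 1).
    The covariate distribution (uniform on [0, 1]) is an assumption."""
    x = rng.uniform(0.0, 1.0, size=n)
    eps = rng.standard_normal(n)
    y = x**6 + sigma * eps
    return x, y

def additive_model(n, sigma=0.1, rng=rng):
    """Additive multivariate model with heavy-tailed t(3) noise and
    heteroscedastic scale sigma * X_2. Covariates assumed uniform on [0, 1]."""
    x = rng.uniform(0.0, 1.0, size=(n, 3))
    eps = rng.standard_t(df=3, size=n)
    y = (np.sin(2 * np.pi * x[:, 0])
         + 0.5 * np.exp(x[:, 1])
         + 1.5 * np.abs((x[:, 2] - 0.4) * (x[:, 2] - 0.6))
         + sigma * x[:, 1] * eps)
    return x, y
```

Under covariate shift, the source and target samples would be drawn from different covariate distributions while keeping the same conditional model; the uniform draws above stand in for whichever distributions the paper actually uses.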
Dataset Splits | Yes | For each simulated scenario, we generate the training data {(X^tr_i, Y^tr_i)}_{i=1}^{n_tr} with sample size n_tr from the source distribution to train those three nonparametric quantile regression models at five quantile levels τ ∈ {0.05, 0.25, 0.5, 0.75, 0.95}. To evaluate each model, we generate the target data {(X^ta_i, Y^ta_i)}_{i=1}^{n_ta} with sample size n_ta from the target distribution. For notation simplicity, we denote f̂^τ_{n_tr} and f^τ_0 as the estimated and true quantile functions at the specific quantile level τ ∈ (0, 1), respectively. We evaluate the performance of these methods based on two norms between f̂^τ_{n_tr} and f^τ_0 as given by ... To estimate the pre-training density ratio, we also independently generate extra training data {(X̃^tr_i, Ỹ^tr_i)}_{i=1}^{m} and target data {(X̃^ta_i, Ỹ^ta_i)}_{i=1}^{m} with the same sample size m. In our study, we fix n_ta = 10000 and m = 1000, and we report the averaged L1 and the squared L2 distances together with their corresponding standard errors over 100 independent repetitions under different scenarios.
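The evaluation described above compares the estimated and true quantile functions via L1 and squared L2 distances, averaged over the target sample. A minimal sketch of such a Monte Carlo evaluation, where `f_hat` and `f_true` are hypothetical callables standing in for the estimated and true quantile functions (names and interface are assumptions, not from the paper):

```python
import numpy as np

def eval_distances(f_hat, f_true, x_target):
    """Monte Carlo estimates of the L1 and squared L2 distances between
    an estimated and the true quantile function, averaged over the
    target-distribution sample x_target."""
    diff = np.asarray(f_hat(x_target)) - np.asarray(f_true(x_target))
    l1 = np.mean(np.abs(diff))       # averaged L1 distance
    l2_sq = np.mean(diff ** 2)       # averaged squared L2 distance
    return l1, l2_sq
```

For example, an estimate with a constant bias of 0.1 gives an L1 distance of 0.1 and a squared L2 distance of 0.01; the paper reports such numbers averaged over 100 independent repetitions.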
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | DQR: we implement it in PyTorch using stochastic gradient descent (SGD) (Bottou, 2012) with Nesterov momentum of 0.9 and an initial learning rate of 0.1 with rate decay 0.5. ... PWDQR: ... we solve (8) by a neural network using PyTorch ... The optimization algorithm is Adam (Kingma and Ba, 2017) with a learning rate of 10^-4. The paper mentions PyTorch and specific optimizers (SGD, Adam) but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | DQR: we implement it in PyTorch using stochastic gradient descent (SGD) (Bottou, 2012) with Nesterov momentum of 0.9 and an initial learning rate of 0.1 with rate decay 0.5. We consider a fixed-width neural network consisting of ReLU-activated multilayer perceptrons with three hidden layers. ... PWDQR: ... For the estimation of r̂_S, we solve (8) by a neural network using PyTorch, which consists of ReLU-activated multilayer perceptrons with two hidden layers. The optimization algorithm is Adam (Kingma and Ba, 2017) with a learning rate of 10^-4. ... we train those three nonparametric quantile regression models at five quantile levels τ ∈ {0.05, 0.25, 0.5, 0.75, 0.95}.
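Training a quantile regression model at a level τ, as in the setup above, means minimizing the standard check (pinball) loss ρ_τ(u) = u(τ − 1{u < 0}). A minimal pure-Python sketch of that objective (the standard textbook definition, not code from the paper):

```python
def check_loss(residual, tau):
    """Check (pinball) loss rho_tau(u) = u * (tau - 1{u < 0}):
    tau * u for u >= 0 and (tau - 1) * u for u < 0."""
    return residual * (tau - (1.0 if residual < 0 else 0.0))

def empirical_risk(y, preds, tau):
    """Average check loss over a sample: the empirical objective
    minimized by the network at quantile level tau."""
    return sum(check_loss(yi - pi, tau) for yi, pi in zip(y, preds)) / len(y)
```

At τ = 0.5 this reduces to half the absolute error (median regression), while τ = 0.05 and τ = 0.95 penalize over- and under-prediction asymmetrically; the importance-weighted variant used under covariate shift would multiply each term by an estimated density ratio, which the paper obtains in its pre-training step.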