Dimension Reduction in Contextual Online Learning via Nonparametric Variable Selection

Authors: Wenhao Li, Ningyuan Chen, L. Jeff Hong

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct numerical experiments to validate the theoretical performance of BV-LASSO. We attempt to address three questions in practice: (1) Can the BV-LASSO algorithm successfully select the relevant variables? (2) How does the BV-LASSO and Learning algorithm perform against existing algorithms that do not consider the sparsity structure? (3) How does BV-LASSO perform when f is a linear function of x? We first introduce the setups below.
Researcher Affiliation | Academia | Wenhao Li EMAIL, College of Business, Shanghai University of Finance and Economics, Shanghai 200433, China. Ningyuan Chen EMAIL, Department of Management, University of Toronto Mississauga, ON L5L 1C6, Canada; Rotman School of Management, University of Toronto, ON M5S 3E6, Canada. L. Jeff Hong EMAIL, School of Management and School of Data Science, Fudan University, Shanghai 200433, China.
Pseudocode | Yes | Algorithm 1: BV-LASSO and Learning. Algorithm 2: Nested BV-LASSO and Learning. Algorithm 3: Linear BV-LASSO and Learning.
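The variable-selection core shared by these algorithms is the localized LASSO referenced in the experiment setup: within a local region (bin) of the covariate space, a linear model is fitted with an ℓ1 penalty, and covariates with non-zero fitted slopes are flagged as relevant. A minimal numpy sketch of one such local fit via coordinate descent is given below; the function name, the bin assignment, and the aggregation of selections across bins are assumptions not specified in this summary.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used in coordinate-descent LASSO."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def local_lasso(X, y, lam, n_iter=200):
    """LASSO fit on (centered) data from a single bin.

    Minimizes (1/2n)||y - Xb||^2 + lam * ||b||_1 by coordinate descent.
    Non-zero entries of the returned slope vector mark covariates
    selected as relevant in this bin.
    """
    Xc = X - X.mean(axis=0)           # center so no intercept is needed
    yc = y - y.mean()
    n, d = Xc.shape
    beta = np.zeros(d)
    col_sq = (Xc ** 2).sum(axis=0)    # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(d):
            # partial residual excluding coordinate j
            r_j = yc - Xc @ beta + Xc[:, j] * beta[j]
            rho = Xc[:, j] @ r_j
            beta[j] = soft_threshold(rho, n * lam) / col_sq[j]
    return beta
```

In a full BV-LASSO implementation this fit would be run per bin, with the union (or a vote) of the per-bin supports giving the selected variable set.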
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. The license information (CC-BY 4.0) pertains to the paper itself, not the implementation code.
Open Datasets | No | At time t, the covariate Xt is independently sampled from a uniform distribution in [0, 1]³. The noise ϵt is generated from a Gaussian distribution N(0, σ²).
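The synthetic data-generating process quoted above can be sketched in a few lines. The sparse reward function `f` (depending only on the first covariate) and the noise level `sigma` below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000      # horizon (illustrative)
d_x = 3       # covariate dimension: X_t is uniform on [0, 1]^3
sigma = 0.1   # noise standard deviation (illustrative)

# Hypothetical sparse function: depends only on the first covariate,
# so BV-LASSO should select variable 0 and drop the rest.
def f(x):
    return np.sin(2 * np.pi * x[..., 0])

X = rng.uniform(0.0, 1.0, size=(T, d_x))   # X_t ~ Uniform([0, 1]^3), i.i.d.
eps = rng.normal(0.0, sigma, size=T)       # eps_t ~ N(0, sigma^2)
Y = f(X) + eps                             # noisy observations
```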
Dataset Splits | No | The paper uses synthetic data generated during the experiments and does not specify train/test/validation splits for a fixed dataset.
Hardware Specification | Yes | We perform BV-LASSO using a PC with 16 GB RAM and an Intel Core i7-3770, 8 cores and 3.40 GHz.
Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) for its experiments.
Experiment Setup | Yes | To implement the algorithm, we need to specify a set of hyperparameters: T, dx, n, h, λ, ξ. Among them, T and dx are known to the decision-maker; ξ can be set to 0.5 as the partial derivatives are non-vanishing in most of the area; n and h are chosen as in Theorem 6. We also set h = 1/n^(1/(2dx+4)) for the bin size. To determine the value of λ, the ℓ1 penalty in the localized LASSO, one is required to know L and µM as in Proposition 4. To avoid this scenario, we use a heuristic approach by noting that λ = Θ(h²) in Proposition 4. We set λ = c_λ h² for some constant c_λ. We vary c_λ to better understand the sensitivity of the algorithm's performance to this choice.
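The hyperparameter schedule above reduces to two one-line formulas. The values of `n` and `c_lambda` below are illustrative placeholders; the paper chooses n via Theorem 6 and varies c_λ in a sensitivity study.

```python
import numpy as np

d_x = 3            # covariate dimension
n = 500            # sample size (illustrative; the paper sets it per Theorem 6)
c_lambda = 1.0     # penalty constant c_lambda (the paper varies this)
xi = 0.5           # threshold xi, set to 0.5 per the quoted setup

h = n ** (-1.0 / (2 * d_x + 4))   # bin size h = 1 / n^(1/(2 d_x + 4))
lam = c_lambda * h ** 2           # l1 penalty: lambda = c_lambda * h^2,
                                  # using lambda = Theta(h^2) from Proposition 4
```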