Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline and validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Minimax optimal approaches to the label shift problem in non-parametric settings
Authors: Subha Maity, Yuekai Sun, Moulinath Banerjee
JMLR 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present simulations that illustrate the effects of class imbalance in the source domain. The labels in the source domain are distributed as Y_i ~ Ber(π_P), and the labels in the target domain are distributed as Y_i ~ Ber(0.75). The class conditional distributions (in both source and target domains) are X_i \| Y_i ~ Y_i · TN(0, 1, −2, 2)^{⊗3} + (1 − Y_i) · TN(2, 1, 0, 4)^{⊗3}, where TN(µ, σ², a, b) is the N(µ, σ²) distribution truncated to the interval [a, b]. We consider three class imbalance settings: π_P = 0.5 (solid line, solid circle as pointer), π_P = 1/√n_P (dashed line, star as pointer) and π_P = 1/n_P (dotted line, plus as pointer). We defer other details of the simulation setup to Appendix A. |
| Researcher Affiliation | Academia | Subha Maity EMAIL Yuekai Sun EMAIL Moulinath Banerjee EMAIL Department of Statistics University of Michigan Ann Arbor, MI |
| Pseudocode | No | The paper describes methods and constructions but does not contain explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The codes and simple demonstrations are provided in https://github.com/smaityumich/labelshift. |
| Open Datasets | No | We start by describing the data generating process D(n, π). Let µ_X denote the probability distribution of a random variable X. For a < b we define TN(µ, σ², a, b) as the N(µ, σ²) distribution truncated to the interval [a, b]. Given inputs sample size n and class probability π for class 1, D(n, π) returns a pair (x, y) where y is an n-dimensional random vector with IID Ber(π) components, and x = [x_1, ..., x_n]^T is an n × 3 random matrix with independent rows. The distribution of the i-th row is x_i \| y_i ~ y_i · µ^{⊗3}_{TN(0,1,−2,2)} + (1 − y_i) · µ^{⊗3}_{TN(2,1,0,4)}. We observe that the features are supported on the hypercube [−2, 4]^3. |
| Dataset Splits | Yes | (x_P, y_P) = D(n_P, 0.5) is the data from the source population. (x_Q, y_Q) = D(n_Q, 0.75) is the data from the target population. (x_test, y_test) = D(n_test, 0.75) is the data for evaluating the performance of the classifiers, which shall also be referred to as test data. Note that the distribution of the test data is the same as the target distribution. |
| Hardware Specification | No | The paper describes simulation details in Appendix A but does not mention any specific hardware used for running the experiments. |
| Software Dependencies | No | The paper describes using a 'β-valid kernel' and refers to various methods, but it does not specify any software libraries or tools with their version numbers. |
| Experiment Setup | Yes | To estimate the class conditional densities, we use a 2-valid kernel (see (3.4)) with the optimal bandwidths h_0 = n_0^{−1/7} and h_1 = n_1^{−1/7} (see Theorem 6)... In that regard, we fix the bandwidth parameters h_0 = n_0^{−1/10}, h_1 = n_1^{−1/10}... We fix h_Q = 1/10... To estimate the class conditional densities, we use a 3-valid kernel (see (3.4)) with the optimal bandwidths h_0 = n_0^{−1/7}, h_1 = n_1^{−1/7}. |
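The data generating process D(n, π) quoted in the table can be sketched as follows. This is an illustrative reimplementation based only on the excerpts above, not the authors' released code (see their GitHub repository for the original); the function and helper names are our own.

```python
import numpy as np
from scipy.stats import truncnorm


def sample_truncnorm(mu, sigma, lo, hi, size, rng):
    """Draw from N(mu, sigma^2) truncated to [lo, hi].

    scipy's truncnorm expects the truncation bounds in standardized
    units, i.e. relative to the untruncated mean and scale.
    """
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, size=size, random_state=rng)


def D(n, pi, seed=None):
    """Sketch of the paper's DGP: y_i ~ Ber(pi); given y_i, the i-th row
    of x is three IID draws from TN(0,1,-2,2) if y_i = 1, else from
    TN(2,1,0,4). Returns (x, y) with x of shape (n, 3)."""
    rng = np.random.default_rng(seed)
    y = rng.binomial(1, pi, size=n)
    class1 = sample_truncnorm(0.0, 1.0, -2.0, 2.0, (n, 3), rng)
    class0 = sample_truncnorm(2.0, 1.0, 0.0, 4.0, (n, 3), rng)
    # Select the row matching each label (features land in [-2, 4]^3).
    x = np.where(y[:, None] == 1, class1, class0)
    return x, y


# Source / target / test samples as described under "Dataset Splits"
# (sample sizes here are arbitrary placeholders):
x_P, y_P = D(500, 0.5, seed=0)     # source population
x_Q, y_Q = D(500, 0.75, seed=1)    # target population
x_test, y_test = D(1000, 0.75, seed=2)  # test data, same law as target
```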