V-Matrix Method of Solving Statistical Inference Problems

Authors: Vladimir Vapnik, Rauf Izmailov

JMLR 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 9.3 Experimental Comparison of I-Matrix (L2-SVM) and V-Matrix Methods. In this section, we compare the L2-SVM based method with the V-matrix based method for estimation of one-dimensional conditional probability functions. ... Figure 1 and Figure 2 present the result of approximation of the conditional probability function for training sets of different sizes (48, 96, 192, 384) using the best γ for the I-matrix method (left column) and the V-matrix method (right column).
Researcher Affiliation | Collaboration | Vladimir Vapnik EMAIL, Columbia University, New York, NY 10027, USA; Facebook AI Research, New York, NY 10017, USA. Rauf Izmailov EMAIL, Applied Communication Sciences, Basking Ridge, NJ 07920-2021, USA.
Pseudocode | Yes | Appendix A. Appendix: V-Matrix for Statistical Inference. In this section, we describe some details of statistical inference algorithms using the V-matrix. First, consider algorithms for conditional probability function P(y|x) estimation and regression function f(x) estimation given iid data... A.1 Algorithms for Conditional Probability and Regression Estimation. Step 1. Find the domain of function. Consider vectors...
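The appendix steps quoted above revolve around the V-matrix itself. As an illustration only (not the authors' code), the sketch below computes a V-matrix under the simplifying assumption of a uniform measure on [0, 1]^d, where the entries reduce to V[i, j] = ∏_k (1 − max(x_i^(k), x_j^(k))); the function name `v_matrix` and the toy data are hypothetical.

```python
import numpy as np

def v_matrix(X):
    """V-matrix under a uniform measure on [0, 1]^d (a common special
    case): V[i, j] = prod_k (1 - max(X[i, k], X[j, k]))."""
    # Pairwise componentwise maxima via broadcasting, shape (n, n, d).
    M = np.maximum(X[:, None, :], X[None, :, :])
    # Product over the d coordinates gives the (n, n) V-matrix.
    return np.prod(1.0 - M, axis=2)

# Toy sample of three points in [0, 1]^2 (illustrative values).
X = np.array([[0.1, 0.2],
              [0.4, 0.3],
              [0.9, 0.8]])
V = v_matrix(X)  # symmetric 3x3 matrix
```

By construction the matrix is symmetric and positive, since each entry is a product of measures of upper-right orthants; other measures on the domain would change only the closed form of each factor.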
Open Source Code | No | No explicit statement about code availability or repository link found in the paper.
Open Datasets | No | For each problem, we generated 10,000 test examples and selected the best possible (for the given training set) value of the parameter γ. Figure 1 and Figure 2 present the result of approximation of the conditional probability function for training sets of different sizes (48, 96, 192, 384) using the best γ for the I-matrix method (left column) and the V-matrix method (right column). In all our experiments we used an equal number of representatives of both classes.
Dataset Splits | Yes | For each problem, we generated 10,000 test examples and selected the best possible (for the given training set) value of the parameter γ. Figure 1 and Figure 2 present the result of approximation of the conditional probability function for training sets of different sizes (48, 96, 192, 384) using the best γ for the I-matrix method (left column) and the V-matrix method (right column). In all our experiments we used an equal number of representatives of both classes.
Hardware Specification | No | No specific hardware details (like GPU/CPU models or memory) are mentioned in the paper.
Software Dependencies | No | Efficient computational implementations for both L0 and L2 norms are available in the popular scientific software package Matlab. (No version number provided, and no other specific software dependencies are listed with versions.)
Experiment Setup | Yes | For each problem, we generated 10,000 test examples and selected the best possible (for the given training set) value of the parameter γ. ... In our experiments, we use the same kernel, namely, the INK-spline of order 0: K(x, xi) = min(x, xi). ... Subsequently, we compared the V-matrix and I-matrix methods when the parameter γ is selected using the cross-validation technique on training data (6-fold cross-validation based on the maximum likelihood criterion): Figure 3 and Figure 4. ... In all our experiments we used an equal number of representatives of both classes.
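The setup quoted above names two concrete ingredients: the one-dimensional INK-spline kernel of order 0, K(x, x') = min(x, x'), and 6-fold cross-validation for selecting γ. The sketch below illustrates both under stated assumptions; the likelihood-based scoring of γ is omitted, and the name `ink_spline_0`, the toy sample, and the fold construction are hypothetical, not taken from the paper.

```python
import numpy as np

def ink_spline_0(u, v):
    """INK-spline kernel of order 0 on scalars: K(u, v) = min(u, v).
    Returns the Gram matrix between two 1-D sample vectors."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return np.minimum(u[:, None], v[None, :])

# Toy one-dimensional training sample (illustrative values).
x = np.array([0.2, 0.5, 0.7, 0.9])
K = ink_spline_0(x, x)  # 4x4 Gram matrix, K[i, j] = min(x[i], x[j])

# A 6-fold partition of indices for the smallest training-set size used
# in the paper (n = 48); each fold would serve once as the held-out set
# when scoring candidate values of gamma by likelihood.
rng = np.random.default_rng(0)
n = 48
folds = np.array_split(rng.permutation(n), 6)
```

Note that this kernel's Gram matrix is symmetric with K[i, j] equal to the smaller of the two sample values, so on sorted positive data it is elementwise positive and grows along the diagonal.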