Unlabelled Compressive Sensing under Sparse Permutation and Prior Information

Authors: Garweet Sresth, Satish Mulleti, Ajit Rajwade

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental.
LLM Response: 5 Numerical Experiments: In order to assess the impact made by knowledge of known correspondences, we compare Ar-Lasso from equation 6 and A-Htp from Alg. 1 to the following estimators, none of which use the prior information of known correspondences: (i) the robust Lasso (R-Lasso) estimator, given by $\arg\min_{\beta \in \mathbb{R}^p,\, e \in \mathbb{R}^n} \frac{1}{2N}\|y - A\beta - \sqrt{n}\,e\|_2^2 + \lambda_\beta \|\beta\|_1 + \lambda_e \|e\|_1$, which is effectively Ar-Lasso with m = 0 and N = n; (ii) the ℓ1-norm hard-thresholding pursuit approach in Peng et al. (2021), which minimizes $\|y - A\beta\|_1$ w.r.t. $\beta \in \mathbb{R}^p$ such that $\|\beta\|_0 \le k$ (we refer to this approach as ℓ1-Htp); (iii) the ℓ1-ℓ1 estimator, motivated from Candes & Tao (2005) and Candes et al. (2005), posed as $\arg\min_{\beta \in \mathbb{R}^p} \|y - A\beta\|_1 + \lambda_\beta \|\beta\|_1$.
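As an illustration of the R-Lasso baseline quoted above, here is a minimal numpy sketch that minimizes the objective $\frac{1}{2n}\|y - A\beta - \sqrt{n}\,e\|_2^2 + \lambda_\beta\|\beta\|_1 + \lambda_e\|e\|_1$ by proximal gradient descent (ISTA). The step size, iteration count, and λ values are illustrative assumptions; the paper itself solves such problems with CVXPY.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def r_lasso(y, A, lam_beta, lam_e, n_iter=500):
    """Minimize (1/(2n)) ||y - A b - sqrt(n) e||_2^2 + lam_beta ||b||_1
    + lam_e ||e||_1 jointly over (b, e) by proximal gradient descent."""
    n, p = A.shape
    b, e = np.zeros(p), np.zeros(n)
    # Lipschitz constant of the smooth part: ||[A, sqrt(n) I]||_2^2 / n
    L = (np.linalg.norm(A, 2) ** 2 + n) / n
    t = 1.0 / L
    for _ in range(n_iter):
        r = y - A @ b - np.sqrt(n) * e              # residual at iteration start
        b = soft_threshold(b + t * (A.T @ r) / n, t * lam_beta)
        e = soft_threshold(e + t * r / np.sqrt(n), t * lam_e)
    return b, e
```

Both variables are updated from the same residual, so this is a standard ISTA step on the stacked variable (b, e), which guarantees a monotone decrease of the objective for the step size 1/L.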
Researcher Affiliation: Academia.
LLM Response: Garweet Sresth (EMAIL), Department of Electrical Engineering, IIT Bombay; Satish Mulleti (EMAIL), Department of Electrical Engineering, IIT Bombay; Ajit Rajwade (EMAIL), Department of Computer Science and Engineering, IIT Bombay.
Pseudocode: Yes.
LLM Response: Algorithm 1 (Augmented Hard-Thresholding Pursuit). Input: measurement vector y, augmented matrix H, sparsity level k, and number of permutations s (both k and s can be estimated via cross-validation; see Sec. 5 under "Choice of parameters"). Parameter: learning rate µ. Output: estimate of β.
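The update rules of the paper's A-Htp are not reproduced in this excerpt, but the hard-thresholding pursuit family it belongs to follows a common template: a gradient step, retention of the k largest-magnitude coordinates, and a least-squares refit on that support. A generic ℓ2-HTP sketch (not the paper's algorithm, which operates on the augmented matrix H):

```python
import numpy as np

def htp(y, A, k, mu, n_iter=50):
    """Generic l2 hard-thresholding pursuit: gradient step on
    0.5 * ||y - A beta||_2^2, keep the k largest-magnitude entries,
    then least-squares refit on the selected support."""
    n, p = A.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        g = beta + mu * (A.T @ (y - A @ beta))   # gradient step, learning rate mu
        support = np.argsort(np.abs(g))[-k:]     # indices of k largest magnitudes
        beta = np.zeros(p)
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        beta[support] = x                        # refit restricted to the support
    return beta
```

A learning rate of 1/||A||_2^2 is a safe default for the gradient step; the paper instead selects µ by cross-validation.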
Open Source Code: No.
LLM Response: The paper does not provide an explicit statement about releasing code or a link to a code repository for the methodology described.
Open Datasets: No.
LLM Response: Data generation: In all the experiments, the entries of A and the non-zero values of β are sampled from N(0, 1). P2 is generated by randomly sampling from the family of s-sparse permutation matrices. The entries of w are independently sampled from N(0, σ²), where σ := fr times the mean absolute value of the entries of the noiseless measurement vector PAβ, with fraction fr ∈ (0, 1). Experiment: Consider two grayscale images I and R from row 1 of Fig. 4. The image I was chosen arbitrarily as the reference image. Image R was generated synthetically by warping I using a displacement vector field that was sparse in the 2D-DCT basis (for both the X and Y components of the motion), following the model in equation 19.
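The quoted data-generation recipe can be sketched in numpy as follows. The dimensions, the function name, and the way the s-sparse permutation is realized (a cycle over s randomly chosen indices) are illustrative assumptions; the paper only states that P2 is drawn from the family of s-sparse permutation matrices.

```python
import numpy as np

def generate_instance(n, p, k, s, fr, rng):
    """Sketch of the synthetic instance: Gaussian A, k-sparse beta with
    N(0, 1) non-zeros, an s-sparse permutation, and noise with
    sigma = fr * mean |(P A beta)_i|."""
    A = rng.standard_normal((n, p))
    beta = np.zeros(p)
    support = rng.choice(p, size=k, replace=False)
    beta[support] = rng.standard_normal(k)       # non-zero values ~ N(0, 1)
    # s-sparse permutation: identity except on s randomly chosen rows,
    # here cyclically shifted so that every chosen row is displaced.
    perm = np.arange(n)
    moved = rng.choice(n, size=s, replace=False)
    perm[moved] = np.roll(moved, 1)
    z = (A @ beta)[perm]                         # permuted noiseless measurements
    sigma = fr * np.mean(np.abs(z))              # noise level from fraction fr
    y = z + sigma * rng.standard_normal(n)
    return A, beta, perm, y
```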
Dataset Splits: No.
LLM Response: The paper describes a cross-validation strategy for parameter selection, splitting the available measurements into 95% for reconstruction and 5% for computing a validation error. However, this serves hyperparameter tuning; it is not a fixed train/validation/test split of a publicly available dataset for direct model evaluation and reproduction. The main evaluation uses synthetically generated data instances.
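The 95%/5% measurement split described above amounts to a random partition of measurement indices; a minimal sketch (the function name and interface are hypothetical):

```python
import numpy as np

def cv_split(n_meas, val_frac=0.05, rng=None):
    """Randomly hold out a val_frac fraction of the measurement indices for
    computing a validation error during hyperparameter selection
    (95% reconstruction / 5% validation, as described in the paper)."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.permutation(n_meas)
    n_val = max(1, int(round(val_frac * n_meas)))
    # (reconstruction indices, validation indices)
    return idx[n_val:], idx[:n_val]
```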
Hardware Specification: No.
LLM Response: The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies: No.
LLM Response: We use CVXPY (Diamond & Boyd, 2016) to solve all the optimization problems, except for Sbl, which is implemented via EM.
Experiment Setup: Yes.
LLM Response: Data generation: In all the experiments, the entries of A and the non-zero values of β are sampled from N(0, 1). P2 is generated by randomly sampling from the family of s-sparse permutation matrices. The entries of w are independently sampled from N(0, σ²), where σ := fr times the mean absolute value of the entries of the noiseless measurement vector PAβ, with fraction fr ∈ (0, 1). Choice of parameters: The regularization parameters λβ and λe in the Ar-Lasso, R-Lasso, Lasso, and ℓ1-ℓ1 algorithms are chosen through cross-validation on a held-out set of measurements. ... In our experiments, we observe that cross-validation overestimates (k, s) by a factor of 2. Hence, we directly set (k, s) to twice their true values in the ℓ1-Htp and ℓ2-Htp algorithms. We select the learning rate in ℓ1-Htp and ℓ2-Htp through cross-validation. The number of iterations in ℓ1-Htp is set to 200, and that in ℓ2-Htp to 100. We always observed convergence within these iteration counts.