Scalable Deep Compressive Sensing

Authors: Zhonghao Zhang, Yipeng Liu, Xingyu Cao, Fei Wen, Ce Zhu

TMLR 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results show that models with SDCS can achieve SSR without changing their structure while maintaining good performance, and SDCS outperforms other SSR methods. [...] Table 1 and Table 2 show the average PSNR and SSIM of 12 models tested on Set11 and the testing set of BSDS500 at different CS ratios respectively.
Researcher Affiliation Collaboration Zhonghao Zhang EMAIL School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China. [...] Xingyu Cao EMAIL Alibaba DAMO Academy. [...] Fei Wen EMAIL Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China.
Pseudocode Yes Algorithm 1 Scalable training of one epoch.
Input: training set T, batch size B, loss function L, max CS ratio R_M, sampling matrix A, initialization matrix B, reconstruction model F_tra(·; Θ) or F_unf(·; A, Θ).
Output: trained parameters.
1: T' ← ∅
2: repeat
3:   Select S = {X_1, X_2, ..., X_B} ⊆ T \ T'.
4:   T' ← T' ∪ S.
5:   Generate {R_1, R_2, ..., R_B} randomly, where R_i ∈ [1, R_M].
6:   Generate {M_1, M_2, ..., M_B}, where M_i(1 : ⌈R_i N⌉, :) = 1 and M_i(⌈R_i N⌉ + 1 : ⌈R_M N⌉, :) = 0.
7:   Generate A_S = {A_S1, A_S2, ..., A_SB}, A_R = {A_R1, A_R2, ..., A_RB} and B_R = {B_R1, B_R2, ..., B_RB}, where A_Si = M_i ⊙ A, A_Ri = M_i ⊙ A and B_Ri = M_i^T ⊙ B.
8:   for i = 1 : B do
9:     y_i = A_Si vec(X_i)
10:    X_i^0 = vec^{-1}(B_Ri y_i)
11:    X̂_i = F_tra(X_i^0; Θ) or X̂_i = F_unf(X_i^0, y_i; A_Ri, Θ)
12:   Compute loss L using {X̂_1, X̂_2, ..., X̂_B} and S.
13:   Update A, B and Θ.
14: until T \ T' = ∅
15: return A, B, Θ.
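The per-sample masking and measurement steps of Algorithm 1 can be sketched in NumPy. This is an illustrative reconstruction, not the authors' code: the function name `scalable_sample` is invented, ⊙ is assumed to be elementwise (row-mask) multiplication, and shapes are chosen to match a 33 × 33 block with a 50% maximum CS ratio.

```python
import numpy as np

def scalable_sample(X, A, B, ratio, max_ratio=0.5):
    """One SDCS-style sampling/initialization step (hypothetical sketch).

    X : (n, n) image block; A : (m_max, N) sampling matrix;
    B : (N, m_max) initialization matrix, with N = n*n and
    m_max = ceil(max_ratio * N). `ratio` is the CS ratio drawn
    for this block, in (0, max_ratio].
    """
    assert 0 < ratio <= max_ratio
    N = X.size
    m = int(np.ceil(ratio * N))      # rows kept at this CS ratio
    mask = np.zeros(A.shape)         # M_i: ones on the first m rows
    mask[:m, :] = 1.0
    A_s = mask * A                   # A_Si = M_i ⊙ A (masked sampling)
    B_r = mask.T * B                 # B_Ri = M_i^T ⊙ B (masked init)
    y = A_s @ X.reshape(-1)          # y_i = A_Si vec(X_i)
    X0 = (B_r @ y).reshape(X.shape)  # X_i^0 = vec^{-1}(B_Ri y_i)
    return y, X0
```

Because rows of A beyond the drawn ratio are zeroed rather than removed, the measurement vector keeps a fixed length m_max; this is what lets one model serve every CS ratio without a structural change.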
Open Source Code No The paper does not contain any explicit statement about releasing code, a link to a code repository, or mention of code in supplementary materials.
Open Datasets Yes All of our experiments are performed on two datasets: BSDS500 (Arbelaez et al., 2010) and Set11 (Lohit et al., 2018a). BSDS500 contains 500 colorful visual images and is composed of a training set (200 images), a validation set (100 images) and a test set (200 images).
Dataset Splits Yes BSDS500 contains 500 colorful visual images and is composed of a training set (200 images), a validation set (100 images) and a test set (200 images). [...] We generate two training sets for models with and without trainable deblocking operations. (a) Training set 1 contains 89600 sub-images sized 99 × 99 which are randomly extracted from the luminance components of images in the training set of BSDS500 (Shi et al., 2019a). (b) Training set 2 contains 195200 sub-images sized 33 × 33 which are randomly extracted from the luminance components of images in the training set of BSDS500 (Zhang & Ghanem, 2018).
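The training-set construction described above (random fixed-size crops from luminance channels) can be sketched as follows. This is a hypothetical helper, not the paper's pipeline: the function name `extract_luminance_patches` is invented and the RGB-to-luminance weights assume standard ITU-R BT.601 coefficients.

```python
import numpy as np

def extract_luminance_patches(images, patch_size=33, patches_per_image=8, seed=0):
    """Randomly crop square sub-images from luminance channels (sketch).

    `images` is an iterable of (H, W, 3) RGB arrays with values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    patches = []
    for img in images:
        # RGB -> luminance (Y) channel, BT.601 weights
        y = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
        h, w = y.shape
        for _ in range(patches_per_image):
            r = rng.integers(0, h - patch_size + 1)
            c = rng.integers(0, w - patch_size + 1)
            patches.append(y[r:r + patch_size, c:c + patch_size])
    return np.stack(patches)
```

With 200 BSDS500 training images, drawing 976 crops per image would yield the 195200 sub-images of training set 2.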
Hardware Specification Yes All experiments are performed on a computer with an AMD Ryzen 7 2700X CPU and an RTX 2080 Ti GPU.
Software Dependencies No The paper mentions algorithms like Adam optimizer but does not specify any software names with version numbers for implementation, programming languages, or libraries.
Experiment Setup Yes In this paper, the model combined with SDCS is named model-SDCS. [...] RM is 50% and RVG is {1%, 4%, 10%, 25%, 30%, 40%, 50%}. [...] All sampling matrices are initialized randomly with a Gaussian distribution. [...] In detail, additive white Gaussian noise (Lepskii, 1991) is added to y of all datasets to train and test models in the subsection. And the signal-to-noise ratios (SNRs) are 40 dB, 30 dB, 25 dB and 15 dB.
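Adding white Gaussian noise to the measurements y at a prescribed SNR, as in the setup above, can be sketched as follows. The helper name `add_awgn` is hypothetical, and SNR is assumed to be the usual power ratio in decibels.

```python
import numpy as np

def add_awgn(y, snr_db, rng=None):
    """Add white Gaussian noise to measurements y at a target SNR in dB."""
    if rng is None:
        rng = np.random.default_rng()
    signal_power = np.mean(y ** 2)
    # SNR (dB) = 10 * log10(signal_power / noise_power)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=y.shape)
    return y + noise
```

Lower SNR values (e.g. 15 dB) inject proportionally more noise power, which is what stresses the robustness of the reconstruction models in the noisy-measurement experiments.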