Learning to reconstruct signals from binary measurements alone

Authors: Julián Tachella, Laurent Jacques

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate in a series of experiments with real datasets that SSBM performs on par with supervised learning and outperforms sparse reconstruction methods with a fixed wavelet basis by a large margin."
Researcher Affiliation | Academia | Julián Tachella, EMAIL, Physics Laboratory, CNRS & École Normale Supérieure de Lyon; Laurent Jacques, EMAIL, ICTEAM, UCLouvain
Pseudocode | No | The paper describes the learning algorithms in Section 4 using mathematical equations and prose, but does not present them in a structured pseudocode or algorithm-block format.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code, a link to a code repository, or any mention of code in supplementary materials.
Open Datasets | Yes | "We evaluate the theoretical bounds using the MNIST dataset, which consists of greyscale images with n = 784 pixels and whose box-counting dimension is approximately k ≈ 12 (Hein & Audibert, 2005). ... In order to demonstrate the robustness of the proposed method across datasets, we evaluate the proposed unsupervised approach on the Fashion MNIST (Xiao et al., 2017), CelebA (Liu et al., 2015) and Flowers (Nilsback & Zisserman, 2008) datasets."
Dataset Splits | Yes | "We use 6 × 10⁴ images for training and 10³ for testing. ... The Fashion MNIST dataset consists of 6 × 10⁴ greyscale images with 28 × 28 pixels which are divided across G = 10 different forward operators. As with MNIST, we use N = 6 × 10³ per operator for training and 10³ per operator for testing. For the CelebA dataset, we use G = 10 forward operators and choose a subset of 10³ images for each operator for training and another subset of the same amount for testing. The Flowers dataset consists of 6149 color images for training and 1020 images for testing, all associated with the same forward operator."
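The per-operator split quoted above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the seed, index assignment, and variable names are assumptions made here for concreteness.

```python
import numpy as np

# Sketch of the Fashion-MNIST split described above: 6 x 10^4 training
# images divided evenly across G = 10 forward operators (6 x 10^3 each),
# with a further 10^3 test images per operator.
G = 10
n_train_total = 60_000
n_train_per_op = n_train_total // G  # 6,000 images per operator

rng = np.random.default_rng(0)       # seed is an arbitrary choice here
train_idx = rng.permutation(n_train_total)

# Assign each operator g its own disjoint chunk of training indices.
splits = {g: train_idx[g * n_train_per_op:(g + 1) * n_train_per_op]
          for g in range(G)}
```

Under this scheme every image is measured by exactly one of the G operators, which matches the paper's description of the data being "divided across G = 10 different forward operators".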
Hardware Specification | No | The paper does not provide specific hardware details, such as GPU models, CPU models, or memory specifications, used for running the experiments.
Software Dependencies | No | The paper mentions using a U-Net network and the Adam optimizer, but does not specify any software names with version numbers (e.g., Python version, PyTorch version).
Experiment Setup | Yes | "We choose fθ(y, A) = fθ(A⊤y) where fθ is the U-Net network used in (Chen et al., 2021) with weights θ, and train for 400 epochs with the Adam optimizer with learning rate 10⁻⁴ and standard hyperparameters β1 = 0.9 and β2 = 0.99. ... Thus, we set α = 0.1 for m < n and α = 0.06 for m ≥ n."
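The quoted setup feeds the network a backprojection of the binary measurements rather than the raw measurement vector. A minimal numpy sketch of that forward model, assuming a random Gaussian operator A and one-bit measurements y = sign(Ax) (the operator, sizes, and seed are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative sketch of the binary forward model and the network
# input f_theta(y, A) = f_theta(A^T y) described above.
rng = np.random.default_rng(0)
n, m = 784, 1568                               # 28x28 image, m measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian operator

x = rng.random(n)          # stand-in for a flattened greyscale image
y = np.sign(A @ x)         # one-bit (binary) measurements
net_input = A.T @ y        # backprojection that f_theta receives
image_in = net_input.reshape(28, 28)           # reshaped for a U-Net
```

Feeding A⊤y instead of y gives the network an input in image space with the same shape as the target, which is what lets a standard image-to-image U-Net be used as fθ.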