DEALing with Image Reconstruction: Deep Attentive Least Squares

Authors: Mehrsa Pourya, Erich Kobler, Michael Unser, Sebastian Neumayer

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: "Our experimental evaluations show results on par with state-of-the-art methods for various inverse problems." Section 5 conducts experiments on 'Grayscale and Color Denoising', 'Color Superresolution', and 'MRI Reconstruction', reporting PSNR and SSIM in tables (e.g., Tables 1, 2, 3, 5) and figures (e.g., Figures 2, 4, 8, 9), with a 'Dataset and Loss' subsection describing training.
Researcher Affiliation | Academia | ¹Biomedical Imaging Group, EPFL, Lausanne, Switzerland; ²Institute for Machine Learning, JKU Linz, Austria; ³Faculty of Mathematics, TU Chemnitz, Germany.
Pseudocode | No | The paper describes its methodology in Section 3, 'Methodology', using prose, equations, and architectural diagrams (e.g., Figure 1). There are no explicitly labeled pseudocode or algorithm blocks, nor any structured procedures formatted like code.
Open Source Code | Yes | Code: https://github.com/mehrsapo/DEAL
Open Datasets | Yes | As training set D = {x_m}_{m=1}^M, we use the images proposed in Zhang et al. (2022). ... We provide in Table 1 the average peak signal-to-noise ratios (PSNR) achieved by various methods over the images of the BSD68 set and the CBSD68 set ... knee images from the fastMRI dataset (Knoll et al., 2020)
Dataset Splits | Yes | As training set D = {x_m}_{m=1}^M, we use the images proposed in Zhang et al. (2022). ... To estimate the parameters θ from the training data, we use the loss ... At each step of the optimizer, we sample 16 patches of size (128 × 128) randomly from D. ... We use the set3 and set12 datasets to validate the color and grayscale models, respectively.
Hardware Specification | Yes | We perform our experiments on a Tesla V100-SXM2-32GB GPU, and report computation times for several methods on the same card.
Software Dependencies | No | The paper mentions 'Adam (Kingma & Ba, 2015)' as the optimizer but does not name any software libraries with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x, Python 3.8).
Experiment Setup | Yes | We set ε_out = ε_in = 1 × 10⁻⁴ and limit the number of CG steps to K_in = 50. ... First, we train the gray and color models for 70 000 and 40 000 steps, respectively, with an initial learning rate of 5 × 10⁻⁴ that is reduced to 4 × 10⁻⁴ by a cosine annealing scheduler. Then, we continue the training of the gray and color model for 10 000 and 5000 steps, respectively, with an initial learning rate of 2 × 10⁻⁴ that is reduced to 1 × 10⁻⁷ by annealing. ... To promote convergence of (6) to a fixed point, we sample K_out uniformly from [15, 60] ... We minimize the loss (12) ... with γ = 10⁻⁴.
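The training procedure quoted above draws 16 random patches of size 128 × 128 per optimizer step. A minimal NumPy sketch of such a sampler (function name and interface are illustrative assumptions, not taken from the released DEAL code):

```python
import numpy as np

def sample_patches(images, n=16, size=128, rng=None):
    """Randomly crop n (size x size) patches from a list of training images,
    mirroring the paper's stated sampling of 16 random 128x128 patches per
    step. Illustrative sketch only, not the authors' implementation."""
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n):
        # Pick a random image, then a random top-left corner inside it.
        img = images[rng.integers(len(images))]
        h, w = img.shape[:2]
        top = rng.integers(h - size + 1)
        left = rng.integers(w - size + 1)
        patches.append(img[top:top + size, left:left + size])
    return np.stack(patches)
```

For grayscale inputs the result is a batch of shape (16, 128, 128); color images with a trailing channel axis crop the same way since only the first two axes are indexed.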
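The reported inner solver is conjugate gradient stopped at tolerance ε_in = 1 × 10⁻⁴ or after K_in = 50 iterations. A textbook CG sketch under those caps (a generic illustration of the stopping rule, not the authors' implementation; `A` stands for any symmetric positive-definite system matrix):

```python
import numpy as np

def cg(A, b, eps=1e-4, k_max=50):
    """Conjugate gradient for A x = b with A symmetric positive definite,
    stopped when the residual norm drops below eps or after k_max steps
    (eps_in = 1e-4, K_in = 50 in the paper's setup)."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(k_max):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < eps:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Capping the iteration count bounds the cost of each outer step, while the tolerance keeps the inner solve accurate enough for the fixed-point iteration to converge.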